US6298144B1 - Device for and method of detecting motion in an image - Google Patents


Info

Publication number
US6298144B1
Authority
US
United States
Prior art keywords
image
test
output
reference image
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US09/081,608
Inventor
Leonard G. Pucker, II
David B. Sofge
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Government of the United States, as represented by the National Security Agency
National Security Agency
Original Assignee
National Security Agency
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Security Agency
Priority to US09/081,608
Assigned to NATIONAL SECURITY AGENCY, GOVERNMENT OF THE UNITED STATES OF AMERICA AS REPRESENTED BY, THE. Assignment of assignors interest (see document for details). Assignors: PUCKER, LEONARD G., II; SOFGE, DAVID B.
Application granted
Publication of US6298144B1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/147Scene change detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection

Definitions

  • This invention relates to image analysis and, more particularly, to the detection of motion that may have occurred between subsequent frames of an image.
  • the ability to detect motion that may have occurred between subsequent frames, or images, in a video of a scene is useful in applications such as machine vision, motion compensation, and data compression.
  • the most popular methods of motion detection and estimation utilize a “block matching” method.
  • A first image of a scene (i.e., a reference image) and a second image (i.e., a test image) are each segmented into N×M blocks.
  • Each block is comprised of pixels.
  • Corresponding blocks from the reference image and the test image are compared mathematically using some matching method.
  • Corresponding pixels in corresponding blocks are called pixel pairs.
  • Popular matching methods presently being used include a “mean of the absolute differences” (MAD) method and a “mean squared error” (MSE) method.
  • MAD computes the sum of the absolute value of the difference of each pixel pair of the corresponding blocks in order to generate a score.
  • MSE computes the sum of the square of the difference of each pixel pair of the corresponding blocks.
  • If little or no motion has occurred, the pixel pairs will cancel when differenced in both MAD and MSE, and a low score will result. If sufficient motion has occurred, the pixel pairs will not cancel and a high score will result. The score is then compared to a threshold to determine whether or not motion has occurred between the reference image and the test image.
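The two matching scores can be sketched as follows (a minimal illustration; the function names are assumptions, and the scores are computed as sums over pixel pairs, as the text describes):

```python
def mad_score(ref_block, test_block):
    """Sum of absolute differences over all pixel pairs (the MAD score;
    despite the name, the text defines it as a sum over the block)."""
    return sum(abs(r - t) for r, t in zip(ref_block, test_block))

def mse_score(ref_block, test_block):
    """Sum of squared differences over all pixel pairs (the MSE score)."""
    return sum((r - t) ** 2 for r, t in zip(ref_block, test_block))

# Identical blocks cancel and score 0; a displaced feature scores high.
still = mad_score([10, 10, 200, 10], [10, 10, 200, 10])
moved = mad_score([10, 10, 200, 10], [10, 200, 10, 10])
```

A low score (e.g., `still`) falls below the threshold and indicates no motion; a high score (e.g., `moved`) exceeds it and indicates motion.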
  • test image is discarded and a new test image is acquired.
  • the steps described above are then repeated using the new test image in order to determine if motion has occurred between the reference image and the new test image.
  • the reference image is discarded, the test image becomes the new reference image, a new test image is acquired, and the steps described above are repeated using the new reference image and the new test image to determine whether or not motion has occurred between the new reference image and the new test image.
  • MAD and MSE are easily implemented, but tend to perform badly when gaussian noise is present in two images that should not otherwise indicate that any motion has occurred. Gaussian noise occurs frequently in sonograms and radar images. One reason MAD and MSE may falsely indicate motion when gaussian noise is present is that gaussian noise in the reference image may be entirely unrelated to gaussian noise in the test image, so the two noise fields do not cancel when differenced: gaussian noise minus independent gaussian noise results in gaussian noise. If two images contain gaussian noise but should not indicate any motion, MAD and MSE will assign a high score to the residual gaussian noise instead of the low score expected.
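The claim that independent gaussian noise does not cancel under differencing is easy to verify numerically (a sketch; the sample size, seed, and unit variance are arbitrary choices, not part of the patent):

```python
import random
import statistics

random.seed(42)
n = 20000
ref_noise = [random.gauss(0.0, 1.0) for _ in range(n)]
test_noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# Differencing two independent unit-variance gaussian noise fields yields
# gaussian noise with variance 2 (standard deviation about 1.41), not zero.
diff = [r - t for r, t in zip(ref_noise, test_noise)]
spread = statistics.stdev(diff)
```

The spread of the difference is larger than that of either input, which is exactly why a sum-of-differences score stays high even with no motion.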
  • U.S. Pat. No. 3,641,257 entitled “NOISE SUPPRESSOR FOR SURVEILLANCE AND INTRUSION-DETECTING SYSTEM,” discloses a correlation method for dealing with noise in a surveillance and intrusion detection system. Correlation is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images (i.e., strong gaussian noise).
  • U.S. Pat. No. 3,641,257 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 4,894,716, entitled “T.V. MOTION DETECTOR WITH FALSE ALARM IMMUNITY,” discloses the use of a low-pass filter and, mainly, a phase-detector to guard against noise-induced false detection of motion. Phase detection is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 4,894,716 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,111,511 entitled “IMAGE MOTION VECTOR DETECTING APPARATUS,” discloses the use of both a low-pass filter and a high-pass filter to guard against noise-induced false detection of motion. Using just a low-pass filter and a high-pass filter is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,111,511 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,157,732 entitled “MOTION VECTOR DETECTOR EMPLOYING IMAGE SUBREGIONS AND MEDIAN VALUES,” discloses the use of a median filter to guard against noise-induced false detection of motion. Median filtering by itself is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,157,732 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,365,603 entitled “METHOD FOR ANALYZING MOVEMENTS IN TEMPORAL SEQUENCES OF DIGITAL IMAGES,” discloses the use of a time-recursive filter to guard against noise-induced false detection of motion. Using just a time-recursive filter is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,365,603 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,535,302 entitled “METHOD AND APPARATUS FOR DETERMINING IMAGE AFFINE FLOW USING ARTIFICIAL NEURAL SYSTEM WITH SIMPLE CELLS AND LIE GERMS,” discloses the use of least square error fitting circuit based on a Lie group model to guard against noise-induced false detection of motion. Using just this matching method is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,535,302 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,543,858 entitled “METHOD OF AND APPARATUS FOR REDUCING NOISE IN A MOTION DETECTOR SIGNAL,” discloses the use of recursive filtering (or low-pass filtering) to guard against noise-induced false detection of motion. Using just recursive filtering is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,543,858 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,548,659 entitled “METHOD AND APPARATUS FOR DETECTING CHANGES IN DYNAMIC IMAGES,” discloses the use of a noise model based on the input images and the difference of the input images to guard against noise-induced false detection of motion.
  • a noise model is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images because gaussian noise does not cancel upon differencing with different gaussian noise but results in gaussian noise that may induce a false detection of motion.
  • U.S. Pat. No. 5,548,659 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,586,202 entitled “MOTION DETECTING APPARATUS,” does not address noise-induced false detection of motion, and is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,586,202 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,588,067 entitled “MOTION DETECTION AND IMAGE ACQUISITION APPARATUS AND METHOD OF DETECTING THE MOTION OF AND ACQUIRING AN IMAGE OF AN OBJECT,” discloses the use of a correlation function to guard against noise-induced false detection of motion.
  • a correlation function is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,588,067 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,589,884 entitled “ADAPTIVE QUANTIZATION CONTROLLED BY SCENE CHANGE DETECTION,” does not address noise-induced false detection of motion and is, therefore, insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,589,884 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,600,731 entitled “METHOD FOR TEMPORALLY ADAPTIVE FILTERING OF FRAMES OF A NOISY IMAGE SEQUENCE USING MOTION ESTIMATION,” discloses the use of linear minimum mean square error matching to guard against noise-induced false detection of motion. Using such a matching method is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,600,731 is hereby incorporated by reference into the specification of the present invention.
  • U.S. Pat. No. 5,627,586, entitled “MOVING BODY DETECTION DEVICE OF CAMERA,” does not address noise-induced false detection of motion and is, therefore, insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
  • U.S. Pat. No. 5,627,586 is hereby incorporated by reference into the specification of the present invention.
  • the present invention is a device for and a method of detecting motion in subsequent images of a scene in the presence of gaussian noise that would normally occur in a sonogram or a radar image.
  • a first image, or reference image is acquired from a scene.
  • the reference image is comprised of pixels.
  • a second image, or test image is acquired from the scene.
  • the test image is also comprised of pixels.
  • the reference image and the test image are aligned to each other in order to eliminate one source of false motion detection.
  • the reference image and the test image are each divided into N ⁇ M blocks.
  • the user may define areas of the reference image and the test image to ignore or mask.
  • Corresponding blocks of the reference image and the test image that are to be considered are differenced (i.e., a pixel in one block is subtracted from the corresponding pixel in the corresponding block of the other image).
  • each block of the difference between the reference image and the test image is median filtered.
  • each median filtered block is low-pass filtered.
  • a normalized histogram is generated for each low-pass-filtered block.
  • a model of gaussian noise is generated.
  • a distance is calculated between each normalized histogram and the noise model.
  • Each calculated distance is compared to a user-defined threshold.
  • the user may define motion as having occurred if one calculated distance is at, or above, the user-definable threshold or if a user-definable number of calculated distances representing adjoining difference blocks of the reference image and the test image are at, or above, the user-definable threshold.
  • the latter approach is preferred in the present invention.
  • the reference image is discarded, the test image is designated as the new reference image, and a new test image of the scene is acquired.
  • FIG. 1 is a list of steps of the method of the present invention.
  • FIG. 2 is a block diagram of the device of the present invention.
  • the present invention is a device for and a method of detecting motion between subsequent images of a scene.
  • the present invention detects motion in the presence of gaussian noise that is commonly present in sonograms and radar images (i.e., a high level of gaussian noise or strong gaussian noise).
  • FIG. 1 is a list of the steps of the method of the present invention.
  • the first step in the method is to acquire a first image (hereinafter called a reference image) of the scene in which a user would like to detect if any motion has occurred.
  • the reference image is comprised of pixels and is used to compare subsequent images of the scene in order to determine whether or not motion has occurred in the scene.
  • the second step in the method is to acquire a second image (hereinafter called a test image) of the scene.
  • the test image is also comprised of pixels.
  • the test image will be compared to the reference image in order to determine whether or not motion has occurred between the reference image and the test image.
  • the third step in the method is to align the reference image and the test image. This step is necessary to reduce false detection of motion which may be caused by camera panning or line jitter. Alignment may be done horizontally or vertically. In the preferred embodiment, alignment is done horizontally only.
  • the fourth step in the method is to divide the aligned reference image and the test image each into N ⁇ M blocks.
  • the user may define the block size. Corresponding blocks in the reference image and the test image will later be compared in order to determine whether or not motion has occurred between the reference image and the test image.
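The division into blocks (step four) can be sketched as follows; it assumes, for simplicity, that the image dimensions are exact multiples of the user-defined block size:

```python
def divide_into_blocks(image, block_size):
    """Split a 2-D image (a list of pixel rows) into square blocks of
    side block_size, in raster order. Assumes the image dimensions are
    multiples of block_size (an assumption made for brevity)."""
    rows, cols = len(image), len(image[0])
    blocks = []
    for by in range(0, rows, block_size):
        for bx in range(0, cols, block_size):
            block = [row[bx:bx + block_size] for row in image[by:by + block_size]]
            blocks.append(block)
    return blocks

# A 4x4 image divided into 2x2 blocks yields four blocks.
img = [[y * 4 + x for x in range(4)] for y in range(4)]
blocks = divide_into_blocks(img, 2)
```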
  • the fifth step in the method is to allow the user to define those areas of the reference image, and corresponding areas in the test image, to ignore or mask. This allows a scene to be monitored that includes known motion that is of no interest (e.g., swaying trees, water fountains, etc.).
  • the sixth step in the method is to difference corresponding non-masked blocks in the reference image and the test image. That is, each non-masked block of the test image is subtracted from its corresponding block in the reference image on a pixel by pixel basis. If the two images for a particular block are identical then this step will result in no image data remaining.
  • Two images for a particular block may be identical in two ways.
  • the first way is that the two images are identical with no noise present in either image.
  • the second way is that the two pictorially identical images include noise that is identical in each image.
  • the two images for a particular block may be different in two ways.
  • the first way is that motion between the two images causes the two images to be pictorially different.
  • the second way is that the two images are pictorially identical but noise contained in one image may be different from noise contained in the other image. This second way may cause a false determination that motion has occurred between the two images.
  • Gaussian noise tends to be different from one image to the next.
  • the probability that two images containing gaussian noise contain the exact same distribution of gaussian noise approaches zero exponentially as the number of pixels in the two images increases.
  • the term “gaussian noise” means that the components of the noise have a gaussian distribution. That is, most of the noise components cluster around an average value, while components with values higher or lower than the average are fewer in number, decreasing the further the value lies from the average. In other words, the noise components form a bell-shaped curve. Two images containing the same bell-shaped distribution of noise components do not necessarily have those components in the same locations. Most likely, the noise components do not occur in the same location from one image to the next.
  • If the images are pictorially identical but contain independent noise, the sixth step of the method results in data that looks like gaussian noise. If little, or no, noise is present then the result of the sixth step may be little, or no, data. If the images are pictorially different (i.e., motion has occurred between the images) then the sixth step of the method, in the presence of much or little noise, results in data that has a distribution that is less gaussian.
  • the sixth step of the method is used to extract the differences between corresponding blocks of the reference image and the test image.
  • the more gaussian the distribution of the difference between a block of the reference image and its corresponding block in the test image, the less likely it is that motion has occurred in that block.
  • the less gaussian that distribution, the more likely it is that motion has occurred in that block.
  • the result of the sixth step is a set of at most N ⁇ M blocks that contain the differences between the reference image and the test image.
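Steps five and six together can be sketched as follows (the mask representation as a list of booleans is an assumption; the patent only says the user may designate areas to ignore):

```python
def difference_blocks(ref_blocks, test_blocks, mask):
    """Pixel-wise difference of corresponding non-masked blocks.
    mask[i] is True for blocks the user has chosen to ignore, so the
    result holds at most N x M difference blocks, keyed by block index."""
    diffs = {}
    for i, (rb, tb) in enumerate(zip(ref_blocks, test_blocks)):
        if mask[i]:
            continue  # masked areas are never considered for motion
        diffs[i] = [[r - t for r, t in zip(rrow, trow)]
                    for rrow, trow in zip(rb, tb)]
    return diffs

# Identical pixels cancel to zero; only the changed pixel survives.
ref = [[[5, 5], [5, 5]]]
test = [[[5, 6], [5, 5]]]
d = difference_blocks(ref, test, [False])
```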
  • the seventh step in the method is to median filter each of the blocks resulting from the sixth step.
  • Median filtering tends to eliminate shot, or spike, noise.
  • To median filter data is to replace each data point with the median of the data points in a window around it. For example, if three data points have values (5,10,5) then median filtering this data would result in the data being recognized as (5,5,5). The spike is totally eliminated and is not averaged into the other data.
  • the result of the seventh step is at most N ⁇ M median filtered blocks.
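A one-dimensional median filter matching the (5,10,5) example above can be sketched as follows (edge handling and the lower-median convention for even-sized windows are implementation assumptions; the patent does not specify them):

```python
def median_filter_1d(data, window=3):
    """Replace each data point with the median of the points in a window
    centred on it. Edge windows are clamped to the data, and the lower
    median is taken for even-sized windows (both are assumptions)."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        neighborhood = sorted(data[lo:hi])
        out.append(neighborhood[(len(neighborhood) - 1) // 2])
    return out

# The spike in (5, 10, 5) is removed entirely, not averaged in.
filtered = median_filter_1d([5, 10, 5])
```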
  • the eighth step in the method is to low-pass filter each of the blocks resulting from the seventh step. Low-pass filtering smears any abrupt changes in an image that may be caused by jitter in the acquisition process (i.e., camera jitter). Without the eighth step, camera jitter of a scene that does not include any motion might be falsely determined to include motion.
  • the result of the eighth step is at most N ⁇ M low-pass filtered blocks.
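One simple low-pass realisation is a moving average, sketched below in one dimension; this is purely illustrative (the device description later suggests a two-dimensional IIR filter in hardware):

```python
def low_pass_1d(data, window=3):
    """Moving-average low-pass filter: each point becomes the mean of a
    window centred on it, smearing abrupt changes such as jitter edges.
    Edge windows are clamped to the data (an assumption)."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        out.append(sum(data[lo:hi]) / (hi - lo))
    return out

# An abrupt single-sample change is smeared across its neighbours.
smoothed = low_pass_1d([0, 9, 0])
```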
  • the ninth step in the method is to generate a normalized histogram for each of the blocks resulting from the eighth step.
  • Each data point in a normalized histogram has a value (e.g., histogram_vector i ).
  • the more gaussian the shape traced out by the histogram_vector i values, the more gaussian is the difference between a block of the reference image and its corresponding block in the test image, and the less likely motion has occurred in this difference block, and vice versa.
  • a normalized histogram is useful for determining whether or not data is gaussian.
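Generating a normalised histogram for a difference block (step nine) can be sketched as follows; the bin range and count are assumptions, and normalisation here means the bin values sum to one:

```python
def normalized_histogram(block, bins, lo, hi):
    """Histogram of the pixel values of a 2-D block over [lo, hi),
    normalised by the number of pixels so the bins sum to 1. Values
    outside the range are clamped into the end bins (an assumption)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    total = 0
    for row in block:
        for p in row:
            i = min(bins - 1, max(0, int((p - lo) / width)))
            counts[i] += 1
            total += 1
    return [c / total for c in counts]

h = normalized_histogram([[0, 1], [2, 3]], bins=4, lo=0, hi=4)
```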
  • the tenth step in the method is to generate a model of gaussian noise.
  • the noise model is in the form of a probability density function, where each point in the noise model has a value (e.g., noise_pdf i ).
  • the model is not generated from the reference image or the test image.
  • the model is a purely mathematical description of gaussian noise.
  • the model may be based on empirical data of similar scenes (e.g., sonograms, radar images, etc.) but is not derived from the scene presently being monitored. This model will be compared to each normalized histogram of step nine in order to determine whether or not motion has occurred in the block represented by the normalized histogram.
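A purely mathematical gaussian noise model (step ten) can be sketched by sampling the gaussian density at each histogram bin centre; the mean and standard deviation here are placeholder assumptions, since the patent leaves their choice open:

```python
import math

def gaussian_noise_model(bins, lo, hi, mean=0.0, sigma=1.0):
    """Gaussian probability density sampled at the centre of each of the
    `bins` histogram bins over [lo, hi), renormalised to sum to 1 so it
    can be compared bin-for-bin with a normalised histogram. The mean
    and sigma defaults are assumptions, not taken from the patent."""
    width = (hi - lo) / bins
    pdf = []
    for i in range(bins):
        x = lo + (i + 0.5) * width
        pdf.append(math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)))
    total = sum(pdf)
    return [p / total for p in pdf]

noise_pdf = gaussian_noise_model(5, -2.5, 2.5)
```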
  • the eleventh step in the method is to calculate the distance between each normalized histogram generated in step nine and the noise model of step ten.
  • Suitable measures of distance include the Kullback-Leibler “goodness of fit” test, the chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test.
  • In these distance calculations, L is the number of histogram bins and block_size is the size of the block in pixels.
  • the twelfth step in the method is to determine whether or not each of the distances calculated in step eleven is at or above a user-definable threshold.
  • the thirteenth step in the method is to determine if motion has occurred between the reference image and the test image. There are at least two ways to do this. First, if any distance calculation is at, or above, the user-definable threshold then it is determined that motion has occurred between the reference image and the test image. Second, if a user-definable number of distance calculations representing adjoining difference blocks of the reference image and the test image are at, or above, the user-definable threshold then it is determined that motion has occurred between the reference image and the test image. The latter approach is the preferred step of the present invention for determining whether or not motion has occurred between the reference image and the test image.
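The preferred decision rule of step thirteen can be sketched as follows; for simplicity the sketch treats "adjoining" as consecutive block indices in raster order, which is an assumption that simplifies true two-dimensional adjacency:

```python
def motion_detected(distances, threshold, adjoining_required):
    """Declare motion only when at least `adjoining_required` adjacent
    difference blocks all have distances at or above the threshold.
    Blocks are assumed indexed in raster order, with 'adjoining' taken
    to mean consecutive indices (a simplification)."""
    run = 0
    for d in distances:
        run = run + 1 if d >= threshold else 0
        if run >= adjoining_required:
            return True
    return False
```

Setting `adjoining_required` to 1 recovers the first, any-block-over-threshold rule; larger values implement the preferred adjoining-blocks rule, which is less sensitive to a single noisy block.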
  • To continuously monitor the scene in order to determine whether or not motion has occurred in the scene, additional steps need to be taken.
  • the additional steps are determined by the result of step thirteen.
  • If no motion was detected in step thirteen, the fourteenth step of the method is to discard the test image, acquire a new test image of the scene, and return to step three for additional processing to determine whether or not motion has occurred between the reference image and the new test image.
  • If motion was detected in step thirteen, the fourteenth step of the method is to discard the reference image, rename the test image as the new reference image, acquire a new test image, and return to step three for additional processing in order to determine if motion has occurred between the new reference image and the new test image.
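The continuous-monitoring logic of steps thirteen and fourteen can be sketched as a loop; `acquire_image` and `detect_motion` below are caller-supplied stand-ins for the acquisition step and for steps three through thirteen, respectively:

```python
def monitor(acquire_image, detect_motion):
    """Generator sketching the steps-13/14 loop: yields True or False
    per comparison. On motion, the test image becomes the new reference;
    otherwise the old reference is kept and the test image is discarded.
    Either way a new test image is acquired for the next comparison."""
    reference = acquire_image()
    test = acquire_image()
    while True:
        moved = detect_motion(reference, test)
        if moved:
            reference = test   # motion: test image becomes the new reference
        # no motion: keep the old reference, discard the test image
        yield moved
        test = acquire_image()  # acquire a new test image and repeat

# Toy usage with scalar 'images' and a threshold-on-difference detector.
images = iter([0, 0, 5, 5, 5])
gen = monitor(lambda: next(images), lambda r, t: abs(r - t) > 1)
```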
  • FIG. 2 is a block diagram 20 of the device of the present invention.
  • a first, or reference, image of a scene is acquired using an image grabber 21 along an input 22 to the image grabber 21 .
  • Image grabbers are commercially available. Alternatively, an image may also be acquired from an electronic file.
  • a second image, or test image, of the scene is also acquired along input 22 of the image grabber 21 .
  • the reference image and the test image are provided along outputs 23 and 24 , respectively, of the image grabber 21 to an image aligner 25 .
  • Image aligners are commercially available.
  • the reference image and the test image must be aligned in order to reduce false detection of motion which may be caused by camera panning or line jitter during image acquisition. Alignment may be done horizontally, vertically, or a combination thereof. In the preferred embodiment, alignment is done horizontally only.
  • the aligned images of the reference image and the test image appear at the outputs 26 and 27 , respectively, of the image aligner 25 .
  • the outputs 26 and 27 of the image aligner 25 are connected to a block divider 28 .
  • the block divider 28 divides the reference image and the test image into N ⁇ M blocks. The user may define the block size.
  • the divided reference image and the divided test image appear at the outputs 29 and 30 , respectively, of the block divider 28 .
  • the user may ignore, or mask, certain areas of the divided reference image and divided test image.
  • Masking may be necessary if motion is normally present in the images and should not be used to determine whether or not motion has occurred between the images. For example, wind may cause a tree that is present in the images to sway, or a waterfall may be present in the images. Such motions are not the type of motion that a user is typically looking for. Therefore, it would be convenient to mask such motion.
  • Masking may be accomplished in the present invention by connecting the outputs 29 and 30 to a masker 31 .
  • the masker 31 may be a commercially available digital signal processor that may be commanded to ignore certain areas of the signals provided thereto.
  • the user may specify the areas that should be masked via a control signal along an input 32 to the masker 31 .
  • the masked, or unmasked, reference image and test image appears at the outputs 33 and 34 , respectively, of the masker 31 .
  • the outputs 33 and 34 of the masker 31 are differenced using a differencer (or subtractor) 35 .
  • Differencers, or subtractors, are known by those skilled in the art.
  • the differencer 35 subtracts corresponding blocks between the reference image and the test image. The difference between the reference image and the test image appears at the output 36 of the differencer 35 .
  • the output 36 of the differencer 35 is connected to a median filter 37 .
  • the median filter 37 eliminates shot, or spike, noise from the difference blocks provided at the output 36 of the differencer 35 . That is, the difference between each block of the reference image and its corresponding block of the test image is median filtered.
  • the output 38 of the median filter 37 is connected to a low-pass filter 39 .
  • the low-pass filter 39 smears any abrupt changes in a difference block that may be caused by jitter in the acquisition process (i.e., camera jitter). That is, each median-filtered difference block is low-pass filtered. Without this smearing, camera jitter of a scene that does not include any motion might be falsely determined to include motion.
  • the low-pass filter 39 may be realized by a commercially available two-dimensional infinite impulse response (IIR) filter.
  • the output 40 of the low-pass filter 39 is connected to a histogram generator 41 .
  • the histogram generator 41 produces a normalized histogram for each of the difference blocks put out by the low-pass filter 39 . Normalized histogram generators are known by those skilled in the art.
  • the output 42 of the histogram generator 41 is connected to a distance calculator 43 .
  • a model of gaussian noise is also provided to the distance calculator along an input 44 to the distance calculator 43 .
  • the distance calculator 43 calculates the distance between the noise model and each normalized histogram put out by the histogram generator 41 . The closer a normalized histogram is to the noise model the more gaussian is the normalized histogram and the more likely that no motion has occurred between the corresponding blocks of the reference image and the test image represented by the normalized histogram, and vice versa.
  • the distance calculator 43 may implement the Kullback-Leibler “goodness of fit” test, the chi-square test, the Kolmogorov-Smirnov test, or the Anderson-Darling test.
  • In the preferred embodiment, the Kullback-Leibler “goodness of fit” test, which is described above, is implemented in the distance calculator 43 .
  • Any suitable commercially available computation device (e.g., a central processing unit) may be programmed to compute one of the distance calculations mentioned above.
  • the output 45 of the distance calculator 43 is connected to a comparator 46 .
  • a user-definable threshold is applied along an input 47 to the comparator 46 .
  • Any suitable commercial comparator may be used for the comparator 46 .
  • the comparator 46 compares each distance calculation put out by the distance calculator 43 to the user-definable threshold. If an output 45 of the distance calculator 43 , which measures how far one difference block is from gaussian, is equal to or greater than the user-definable threshold then the difference block represented by the output 45 is determined to be non-gaussian.
  • the comparator 46 may put out a logic one level at its output 48 to indicate that a particular input to the comparator 46 is determined to be non-gaussian and put out a logic zero at its output 48 if the particular input is determined to be gaussian.
  • the output 48 of the comparator 46 is connected to a controller 49 .
  • the determination whether or not each difference block is gaussian is passed from the comparator 46 to the controller 49 .
  • a gaussian determination for all of the difference blocks indicates that motion has not occurred between the reference image and the test image.
  • motion between the reference image and the test image may be determined if one difference block is determined to be non-gaussian or if a user-definable number of adjoining difference blocks are determined to be non-gaussian.
  • In the preferred embodiment, a user-definable number of adjoining difference blocks must be determined to be non-gaussian before motion is determined to have occurred between the reference image and the test image.
  • An input 50 to the controller 49 allows the user to indicate how many adjoining difference blocks (e.g., N) must be determined to be non-gaussian before motion is determined.
  • the output 51 of the controller 49 indicates whether or not motion has occurred between the reference image and the test image. For example, a logic one output may indicate motion and a logic zero output may indicate no motion.
  • the output 51 of the controller 49 is used to determine what images should be used for the next reference image and the next test image.
  • the output 51 is connected to the image grabber 21 in order to control what images are used for the next reference image and the next test image. If the output 51 of the controller indicates that no motion has occurred between the original reference image and the original test image then the present reference image will be used for the next reference image and a new test image will be acquired by the image grabber 21 . The new test image will then be compared to the original reference image, as described above, to determine whether or not motion has occurred between these images.
  • the original test image is used as the next reference image and a new test image is acquired by the image grabber 21 .
  • the new test image will then be compared to the new reference image, as described above, to determine whether or not motion has occurred between these images.
  • This process may be repeated for as long as the user wishes to monitor the scene.

Abstract

A device for and method of detecting motion between a reference image and a test image by acquiring the images; aligning the images; dividing the images into blocks; masking certain blocks; differencing corresponding blocks; median filtering the differences; low-pass filtering the outputs of the median filter; generating a normalized histogram for each output of the low-pass filter; generating a model of gaussian noise; calculating the distance between the noise model and each normalized histogram; comparing each distance calculated to a user-definable threshold; and determining that motion has occurred between the images if a certain number of distance calculations are at or above the user-definable threshold. If a scene is to be continuously monitored and no motion occurred between the previous reference image and the previous test image then a new test image is acquired and compared against the previous reference image as described above. If the scene is to be continuously monitored and motion has occurred between the previous reference image and the previous test image then the previous reference image is replaced with the previous test image, a new test image is acquired, and the new test image is compared to the new reference image as described above.

Description

FIELD OF THE INVENTION
This invention relates to image analysis and, more particularly, to the detection of motion that may have occurred between subsequent frames of an image.
BACKGROUND OF THE INVENTION
The ability to detect motion that may have occurred between subsequent frames, or images, in a video of a scene is useful in applications such as machine vision, motion compensation, and data compression. The most popular methods of motion detection and estimation utilize a “block matching” method.
In a block matching method, a first image of a scene (i.e., a reference image) and a second image (i.e., a test image) are segmented into N×M blocks. Each block is comprised of pixels. Corresponding blocks from the reference image and the test image are compared mathematically using some matching method. Corresponding pixels in corresponding blocks are called pixel pairs. Popular matching methods presently being used include a “mean of the absolute differences” (MAD) method and a “mean squared error” (MSE) method. MAD computes the sum of the absolute value of the difference of each pixel pair of the corresponding blocks in order to generate a score. MSE computes the sum of the square of the difference of each pixel pair of the corresponding blocks.
If little motion occurs between the reference image and the test image then the pixel pairs will cancel when differenced in both MAD and MSE and a low score will result. If sufficient motion has occurred then the pixel pairs will not cancel and a high score will result. The score is then compared to a threshold to determine whether or not motion has occurred between the reference image and the test image.
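The MAD and MSE scores described above can be sketched as follows (an illustrative sketch only; the function names and sample blocks are hypothetical, not from the specification):

```python
# Illustrative sketch of the MAD and MSE block-matching scores.
# Blocks are flat lists of pixel intensities; names are hypothetical.

def mad_score(ref_block, test_block):
    """Mean of the absolute differences of each pixel pair."""
    return sum(abs(r - t) for r, t in zip(ref_block, test_block)) / len(ref_block)

def mse_score(ref_block, test_block):
    """Mean of the squared differences of each pixel pair."""
    return sum((r - t) ** 2 for r, t in zip(ref_block, test_block)) / len(ref_block)

ref = [10, 10, 12, 14]
same = [10, 10, 12, 14]    # identical block: pixel pairs cancel, low score
moved = [14, 12, 10, 10]   # shifted content: pairs do not cancel, high score
```

An identical block scores zero under both measures, while the shifted block produces a nonzero score that would then be compared against a threshold.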
If no motion has occurred between the reference image and the test image then the test image is discarded and a new test image is acquired. The steps described above are then repeated using the new test image in order to determine if motion has occurred between the reference image and the new test image.
If motion has occurred between the reference image and the test image then such motion is indicated, the reference image is discarded, the test image becomes the new reference image, a new test image is acquired, and the steps described above are repeated using the new reference image and the new test image to determine whether or not motion has occurred between the new reference image and the new test image.
MAD and MSE are easily implemented, but tend to perform badly when gaussian noise is present in two images that should not otherwise indicate that any motion has occurred. Gaussian noise occurs frequently in sonograms and radar images. One reason why MAD and MSE may falsely indicate motion when gaussian noise is present in the images is that the gaussian noise in the reference image may be completely unrelated to the gaussian noise in the test image, so the two noise patterns do not cancel when differenced. Gaussian noise minus different gaussian noise results in gaussian noise. If two images should not indicate any motion but contain gaussian noise then MAD and MSE will assign a high score to the gaussian noise that remains after differencing instead of the low score that is expected.
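This failure mode can be demonstrated numerically: differencing two independent gaussian noise fields does not cancel them but yields noise with roughly sqrt(2) times the original spread (a sketch using Python's standard library; the sample size and sigma are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)
sigma = 5.0
# Two pictorially identical "images" that differ only in their gaussian noise.
noise_a = [random.gauss(0, sigma) for _ in range(10000)]
noise_b = [random.gauss(0, sigma) for _ in range(10000)]

diff = [a - b for a, b in zip(noise_a, noise_b)]
spread = statistics.pstdev(diff)  # about sqrt(2) * sigma, not zero
```

Because the difference retains a gaussian spread near 7.07 here rather than collapsing toward zero, MAD and MSE would assign it a high score even though no motion occurred.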
Other prior art matching methods include cross-correlation and the Radon Transform. These methods perform better than MAD and MSE in the presence of gaussian noise, but falsely indicate motion in the presence of strong gaussian noise which commonly occurs in sonograms or radar images.
U.S. Pat. No. 3,641,257, entitled “NOISE SUPPRESSOR FOR SURVEILLANCE AND INTRUSION-DETECTING SYSTEM,” discloses a correlation method for dealing with noise in a surveillance and intrusion detection system. Correlation is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images (i.e., strong gaussian noise). U.S. Pat. No. 3,641,257 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 4,894,716, entitled “T.V. MOTION DETECTOR WITH FALSE ALARM IMMUNITY,” discloses the use of a low-pass filter and, mainly, a phase-detector to guard against noise-induced false detection of motion. Phase detection is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 4,894,716 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,111,511, entitled “IMAGE MOTION VECTOR DETECTING APPARATUS,” discloses the use of both a low-pass filter and a high-pass filter to guard against noise-induced false detection of motion. Using just a low-pass filter and a high-pass filter is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,111,511 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,157,732, entitled “MOTION VECTOR DETECTOR EMPLOYING IMAGE SUBREGIONS AND MEDIAN VALUES,” discloses the use of a median filter to guard against noise-induced false detection of motion. Median filtering by itself is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,157,732 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,365,603, entitled “METHOD FOR ANALYZING MOVEMENTS IN TEMPORAL SEQUENCES OF DIGITAL IMAGES,” discloses the use of a time-recursive filter to guard against noise-induced false detection of motion. Using just a time-recursive filter is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,365,603 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,535,302, entitled “METHOD AND APPARATUS FOR DETERMINING IMAGE AFFINE FLOW USING ARTIFICIAL NEURAL SYSTEM WITH SIMPLE CELLS AND LIE GERMS,” discloses the use of least square error fitting circuit based on a Lie group model to guard against noise-induced false detection of motion. Using just this matching method is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,535,302 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,543,858, entitled “METHOD OF AND APPARATUS FOR REDUCING NOISE IN A MOTION DETECTOR SIGNAL,” discloses the use of recursive filtering (or low-pass filtering) to guard against noise-induced false detection of motion. Using just recursive filtering is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,543,858 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,548,659, entitled “METHOD AND APPARATUS FOR DETECTING CHANGES IN DYNAMIC IMAGES,” discloses the use of a noise model based on the input images and the difference of the input images to guard against noise-induced false detection of motion. Such a noise model is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images because gaussian noise does not cancel upon differencing with different gaussian noise but results in gaussian noise that may induce a false detection of motion. U.S. Pat. No. 5,548,659 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,586,202, entitled “MOTION DETECTING APPARATUS,” does not address noise-induced false detection of motion, and is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,586,202 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,588,067, entitled “MOTION DETECTION AND IMAGE ACQUISITION APPARATUS AND METHOD OF DETECTING THE MOTION OF AND ACQUIRING AN IMAGE OF AN OBJECT,” discloses the use of a correlation function to guard against noise-induced false detection of motion. A correlation function is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,588,067 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,589,884, entitled “ADAPTIVE QUANTIZATION CONTROLLED BY SCENE CHANGE DETECTION,” does not address noise-induced false detection of motion and is, therefore, insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,589,884 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,600,731, entitled “METHOD FOR TEMPORALLY ADAPTIVE FILTERING OF FRAMES OF A NOISY IMAGE SEQUENCE USING MOTION ESTIMATION,” discloses the use of linear minimum mean square error matching to guard against noise-induced false detection of motion. Using such a matching method is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,600,731 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,627,586, entitled “MOVING BODY DETECTION DEVICE OF CAMERA,” does not address noise-induced false detection of motion and is, therefore, insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images. U.S. Pat. No. 5,627,586 is hereby incorporated by reference into the specification of the present invention.
In an article entitled “High Speed High Accuracy Motion Detection and Tracking on the Parallel Pipelined Projection Engine,” published in New York, N.Y. by the IEEE on Sep. 6, 1989 in The Sixth Multidimensional Signal Processing Workshop, page 242, W. B. Baringer and R. W. Brodersen disclose the use of the Radon transform to reduce video noise. The Radon transform is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
In an article entitled “Array Architecture for Block Matching Algorithms,” published in New York, N.Y. by the IEEE in 1989, pages 1301-1308, Thomas Komarek and Peter Pirsch disclose the use of the “mean of the absolute differences” (MAD) in a block matching algorithm. MAD is insufficient to guard against noise-induced false detection of motion in the presence of gaussian noise that is commonly found in sonograms or radar images.
SUMMARY OF THE INVENTION
It is an object of the present invention to detect motion between subsequent images of a scene.
It is another object of the present invention to detect motion between subsequent images of a scene in the presence of gaussian noise that would normally occur in a sonogram or a radar image.
The present invention is a device for and a method of detecting motion in subsequent images of a scene in the presence of gaussian noise that would normally occur in a sonogram or a radar image.
A first image, or reference image, is acquired from a scene. The reference image is comprised of pixels.
A second image, or test image, is acquired from the scene. The test image is also comprised of pixels.
The reference image and the test image are aligned to each other in order to eliminate one source of false motion detection.
The reference image and the test image are each divided into N×M blocks.
The user may define areas of the reference image and the test image to ignore or mask.
Corresponding blocks of the reference image and the test image that are to be considered are differenced (i.e., a pixel in one block is subtracted from the corresponding pixel in the corresponding block of the other image).
To eliminate shot, or spike, noise, that may cause a false detection of motion, each block of the difference between the reference image and the test image is median filtered.
To eliminate any false detection of motion that may be caused by jitter in the acquisition of the reference image or the test image, each median filtered block is low-pass filtered.
A normalized histogram is generated for each low-pass-filtered block.
A model of gaussian noise is generated.
A distance is calculated between each normalized histogram and the noise model.
Each calculated distance is compared to a user-defined threshold.
The user may define motion as having occurred if one calculated distance is at, or above, the user-definable threshold or if a user-definable number of calculated distances representing adjoining difference blocks of the reference image and the test image are at, or above, the user-definable threshold. The latter approach is preferred in the present invention.
Additional steps are required to continuously monitor the scene. If it was determined that no motion has occurred between the reference image and the test image then the test image is discarded and a new test image of the scene is acquired. The steps described above, starting with the step of dividing the reference image and the test image into N×M blocks, are then performed on the reference image and the new test image to determine whether or not motion has occurred between the reference image and the new test image.
If it was determined that motion has occurred between the reference image and the test image then the reference image is discarded, the test image is designated as the new reference image, and a new test image of the scene is acquired. The steps described above, starting with the step of dividing the reference image and the test image into N×M blocks, are then performed on the new reference image and the new test image to determine whether or not motion has occurred between the new reference image and the new test image.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a list of steps of the method of the present invention; and
FIG. 2 is a block diagram of the device of the present invention.
DETAILED DESCRIPTION
The present invention is a device for and a method of detecting motion between subsequent images of a scene. The present invention detects motion in the presence of gaussian noise that is commonly present in sonograms and radar images (i.e., a high level of gaussian noise or strong gaussian noise).
FIG. 1 is a list of the steps of the method of the present invention.
The first step in the method is to acquire a first image (hereinafter called a reference image) of the scene in which a user would like to detect if any motion has occurred. The reference image is comprised of pixels and is used to compare subsequent images of the scene in order to determine whether or not motion has occurred in the scene.
The second step in the method is to acquire a second image (hereinafter called a test image) of the scene. The test image is also comprised of pixels. The test image will be compared to the reference image in order to determine whether or not motion has occurred between the reference image and the test image.
The third step in the method is to align the reference image and the test image. This step is necessary to reduce false detection of motion which may be caused by camera panning or line jitter. Alignment may be done horizontally or vertically. In the preferred embodiment, alignment is done horizontally only.
The fourth step in the method is to divide the aligned reference image and the test image each into N×M blocks. The user may define the block size. Corresponding blocks in the reference image and the test image will later be compared in order to determine whether or not motion has occurred between the reference image and the test image.
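The division into blocks can be sketched as follows (hypothetical helper; assumes the image dimensions are exact multiples of the block size):

```python
def divide_into_blocks(image, block_h, block_w):
    """Split a 2-D image (a list of pixel rows) into non-overlapping
    block_h x block_w tiles, in row-major order."""
    blocks = []
    for r in range(0, len(image), block_h):
        for c in range(0, len(image[0]), block_w):
            blocks.append([row[c:c + block_w] for row in image[r:r + block_h]])
    return blocks

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 sample image
tiles = divide_into_blocks(img, 2, 2)                    # four 2x2 blocks
```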
The fifth step in the method is to allow the user to define those areas of the reference image, and corresponding areas in the test image, to ignore or mask. This allows a scene to be monitored that includes known motion that is of no interest (e.g., swaying trees, water fountains, etc.).
The sixth step in the method is to difference corresponding non-masked blocks in the reference image and the test image. That is, each non-masked block of the test image is subtracted from its corresponding block in the reference image on a pixel by pixel basis. If the two images for a particular block are identical then this step will result in no image data remaining.
Two images for a particular block may be identical in two ways. The first way is that the two images are identical with no noise present in either image. The second way is that the two pictorially identical images include noise that is identical in each image.
The two images for a particular block may be different in two ways. The first way is that motion between the two images causes the two images to be pictorially different. The second way is that the two images are pictorially identical but noise contained in one image may be different from noise contained in the other image. This second way may cause a false determination that motion has occurred between the two images.
Gaussian noise tends to be different from one image to the next. The probability that two images containing gaussian noise contain the exact same distribution of gaussian noise approaches zero exponentially as the number of pixels in the two images increases. The term “gaussian noise” means that the components of the noise have a gaussian distribution. That is, most of the components of the noise have an average value, while the noise components that have values higher or lower than the average value are fewer in number, and their numbers decrease the further away the component is from the average value. In other words, the noise components form a bell-shaped curve. Just because two images contain the same bell-shaped distribution of noise components does not mean that the noise components occur in the same locations from one image to the next. Most likely, noise components do not occur in the same location from one image to the next. Typically, subtracting gaussian noise in one image from gaussian noise in a second image will not cancel the noise but will result in a difference that has a gaussian distribution. That is, the difference of two gaussian noise images looks like gaussian noise as well.
In the presence of strong gaussian noise (i.e., gaussian noise that would typically be encountered in a sonogram or a radar image) that is different from one image to the next and where the two images are pictorially identical, the sixth step of the method results in data that looks like gaussian noise. If little, or no, noise is present then the result of the sixth step may be little, or no, data. If the images are pictorially different (i.e., motion has occurred between the images) then the sixth step of the method, in the presence of much or little noise, results in data that has a distribution that is less gaussian.
So, the sixth step of the method is used to extract the differences between corresponding blocks of the reference image and the test image. The more gaussian the distribution of the difference between a block of the reference image and its corresponding block in the test image, the less likely it is that motion has occurred in that block. The less gaussian the distribution of the difference, the more likely it is that motion has occurred in that block. The result of the sixth step is a set of at most N×M blocks that contain the differences between the reference image and the test image.
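The differencing of corresponding blocks amounts to a pixel-by-pixel subtraction, sketched here (the helper name is hypothetical):

```python
def difference_blocks(ref_block, test_block):
    """Subtract a test-image block from its corresponding reference-image
    block on a pixel-by-pixel basis (the sixth step)."""
    return [[r - t for r, t in zip(ref_row, test_row)]
            for ref_row, test_row in zip(ref_block, test_block)]

# Identical blocks leave no image data; differing blocks leave a residue.
no_data = difference_blocks([[1, 2], [3, 4]], [[1, 2], [3, 4]])
residue = difference_blocks([[5, 5], [5, 5]], [[2, 5], [5, 5]])
```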
The seventh step in the method is to median filter each of the blocks resulting from the sixth step. Median filtering tends to eliminate shot, or spike, noise. To median filter data is to replace each data point with the median (i.e., the middle value when sorted) of the data points in its neighborhood. For example, if three data points have values (5,10,5) then median filtering this data would result in the data being recognized as (5,5,5). The spike is totally eliminated and is not averaged into the other data. The result of the seventh step is at most N×M median filtered blocks.
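The median filtering of the seventh step can be sketched as a sliding-window median (the window size and edge handling here are illustrative choices not fixed by the specification):

```python
import statistics

def median_filter_1d(data, window=3):
    """Replace each sample with the median of its neighborhood,
    clamping the window at the edges of the data."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        out.append(statistics.median(data[lo:hi]))
    return out

filtered = median_filter_1d([5, 5, 10, 5, 5])  # the spike of 10 is removed
```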
The eighth step in the method is to low-pass filter each of the blocks resulting from the seventh step. Low-pass filtering smears any abrupt changes in an image that may be caused by jitter in the acquisition process (i.e., camera jitter). Without the eighth step, camera jitter of a scene that does not include any motion might be falsely determined to include motion. The result of the eighth step is at most N×M low-pass filtered blocks.
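A crude low-pass filter can be sketched as a moving average; the patent does not specify the filter, so this box filter is only an illustration of the smearing effect:

```python
def box_smooth(data, window=3):
    """Average each sample over a sliding window (edges clamped),
    smearing abrupt changes such as a jitter-shifted boundary."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        segment = data[lo:hi]
        out.append(sum(segment) / len(segment))
    return out

edge = [0, 0, 0, 9, 9, 9]    # abrupt step, as camera jitter might produce
smoothed = box_smooth(edge)  # the step is spread across its neighbors
```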
The ninth step in the method is to generate a normalized histogram for each of the blocks resulting from the eighth step. Each data point in a normalized histogram has a value (e.g., histogram_vector_i). As mentioned above, the more gaussian the distribution of a difference block, the less likely it is that motion has occurred in that block, and vice versa. A normalized histogram is useful for determining whether or not data is gaussian.
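A normalized histogram can be sketched as follows (the bin count and value range are illustrative parameters):

```python
def normalized_histogram(pixels, bins, lo, hi):
    """Histogram of pixel values over [lo, hi), scaled so the
    bin fractions sum to one."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for p in pixels:
        idx = min(int((p - lo) / width), bins - 1)  # clamp the top edge
        counts[idx] += 1
    return [c / len(pixels) for c in counts]

h = normalized_histogram([0, 1, 2, 3], bins=2, lo=0, hi=4)
```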
The tenth step in the method is to generate a model of gaussian noise. The noise model is in the form of a probability density function, where each point in the noise model has a value (e.g., noise_pdf_i). The model is not generated from the reference image or the test image. The model is a purely mathematical description of gaussian noise. The model may be based on empirical data of similar scenes (e.g., sonograms, radar images, etc.) but is not derived from the scene presently being monitored. This model will be compared to each normalized histogram of step nine in order to determine whether or not motion has occurred in the block represented by the normalized histogram.
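Such a purely mathematical noise model might be built by sampling a gaussian probability density at the histogram bin centers (the mean, sigma, and bin layout below are assumptions for illustration):

```python
import math

def gaussian_noise_model(bins, lo, hi, mu=0.0, sigma=1.0):
    """Gaussian pdf sampled at bin centers, renormalized so the model
    sums to one like the normalized histograms it is compared against."""
    width = (hi - lo) / bins
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    pdf = [math.exp(-((c - mu) ** 2) / (2 * sigma ** 2)) for c in centers]
    total = sum(pdf)
    return [p / total for p in pdf]

model = gaussian_noise_model(bins=11, lo=-5.5, hi=5.5)
```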
The eleventh step in the method is to calculate the distance between each normalized histogram generated in step nine and the noise model of step ten. The closer a normalized histogram is to the noise model, the more likely the block represented by the normalized histogram is gaussian and the less likely that motion has occurred in that block. The farther a normalized histogram is from the noise model, the less likely the block represented by the normalized histogram is gaussian and the more likely that motion has occurred in that block. Suitable measures of distance include the Kullback-Leibler “goodness of fit” test, the chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test.
The Kullback-Leibler “goodness of fit” test, which is the preferred distance calculation method of the present invention, is calculated as follows:

Distance = Σ_{i=0}^{L} (noise_pdf_i) log_e((block_size × noise_pdf_i) / histogram_vector_i), where

L is the number of histogram bins, and
block_size is the size of the block in pixels.
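The distance formula above can be sketched directly (skipping empty histogram bins to avoid division by zero is an implementation choice the specification does not address):

```python
import math

def kl_distance(noise_pdf, histogram_vector, block_size):
    """Kullback-Leibler style distance between the gaussian noise model
    and one normalized histogram, following the formula above."""
    total = 0.0
    for n, h in zip(noise_pdf, histogram_vector):
        if n > 0 and h > 0:  # skip empty bins (implementation choice)
            total += n * math.log((block_size * n) / h)
    return total
```

When each histogram_vector_i equals block_size × noise_pdf_i, every logarithm is log(1) and the distance is zero; histograms that deviate from the scaled noise model produce larger distances.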
The twelfth step in the method is to determine whether or not each of the distances calculated in step eleven is at or above a user-definable threshold.
The thirteenth step in the method is to determine if motion has occurred between the reference image and the test image. There are at least two ways to do this. First, if any distance calculation is at, or above, the user-definable threshold then it is determined that motion has occurred between the reference image and the test image. Second, if a user-definable number of distance calculations representing adjoining difference blocks of the reference image and the test image are at, or above, the user-definable threshold then it is determined that motion has occurred between the reference image and the test image. The latter approach is the preferred step of the present invention for determining whether or not motion has occurred between the reference image and the test image.
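The preferred decision rule can be sketched as a scan for a run of adjoining above-threshold blocks (treating the blocks as a one-dimensional raster is a simplifying assumption here; adjacency in two dimensions would need a neighborhood test):

```python
def motion_decided(above_threshold, n_required):
    """Declare motion only when at least n_required consecutive
    difference blocks are at or above the distance threshold."""
    run = 0
    for flag in above_threshold:
        run = run + 1 if flag else 0
        if run >= n_required:
            return True
    return False
```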
To continuously monitor the scene in order to determine whether or not motion has occurred in the scene, additional steps need to be taken. The additional steps are determined by the result of step thirteen.
If no motion was determined in step thirteen then the fourteenth step of the method is to discard the test image, acquire a new test image of the scene, and return to step three for additional processing to determine whether or not motion has occurred between the reference image and the new test image.
If motion was determined in step thirteen then the fourteenth step of the method is to discard the reference image, rename the test image as the new reference image, acquire a new test image, and return to step three for additional processing in order to determine if motion has occurred between the new reference image and the new test image.
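The continuous-monitoring control flow of the last two steps can be sketched as a loop in which `detect` stands in for the whole alignment-through-threshold pipeline (both names are hypothetical):

```python
def monitor(frames, detect):
    """Keep the reference image when no motion is found; promote the
    test image to reference when motion is found (steps 13-14)."""
    events = []
    frames = iter(frames)
    reference = next(frames)
    for test in frames:
        if detect(reference, test):
            events.append(True)
            reference = test      # motion: test image becomes the reference
        else:
            events.append(False)  # no motion: reference kept, test discarded
    return events

# Toy stand-in for the pipeline: scalar "images", fixed threshold.
events = monitor([0, 1, 10, 11], lambda r, t: abs(r - t) > 5)
```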
FIG. 2 is a block diagram 20 of the device of the present invention.
A first, or reference, image of a scene is acquired using an image grabber 21 along an input 22 to the image grabber 21. Image grabbers are commercially available. Alternatively, an image may also be acquired from an electronic file.
A second image, or test image, of the scene is also acquired along input 22 of the image grabber 21.
The reference image and the test image are provided along outputs 23 and 24, respectively, of the image grabber 21 to an image aligner 25. Image aligners are commercially available. The reference image and the test image must be aligned in order to reduce false detection of motion which may be caused by camera panning or line jitter during image acquisition. Alignment may be done horizontally, vertically, or a combination thereof. In the preferred embodiment, alignment is done horizontally only. The aligned images of the reference image and the test image appear at the outputs 26 and 27, respectively, of the image aligner 25.
The outputs 26 and 27 of the image aligner 25 are connected to a block divider 28. The block divider 28 divides the reference image and the test image into N×M blocks. The user may define the block size. The divided reference image and the divided test image appear at the outputs 29 and 30, respectively, of the block divider 28.
The user may ignore, or mask, certain areas of the divided reference image and divided test image. Masking may be necessary if motion is normally present in the images and should not be used to determine whether or not motion has occurred between the images. For example, wind may cause a tree that is present in the images to sway, or a waterfall may be present in the images. Such motions are not the type of motion that a user is typically looking for. Therefore, it would be convenient to mask such motion. Masking may be accomplished in the present invention by connecting the outputs 29 and 30 to a masker 31. The masker 31 may be a commercially available digital signal processor that may be commanded to ignore certain areas of the signals provided thereto. The user may specify the areas that should be masked via a control signal along an input 32 to the masker 31. The masked, or unmasked, reference image and test image appear at the outputs 33 and 34, respectively, of the masker 31.
The outputs 33 and 34 of the masker 31 are differenced using a differencer (or subtractor) 35. Differencers, or subtractors, are known by those skilled in the art. The differencer 35 subtracts corresponding blocks between the reference image and the test image. The difference between the reference image and the test image appears at the output 36 of the differencer 35.
The output 36 of the differencer 35 is connected to a median filter 37. The median filter 37 eliminates shot, or spike, noise from each output 36 of the differencer 35. That is, the difference between each block of the reference image and its corresponding test image is median filtered.
The output 38 of the median filter 37 is connected to a low-pass filter 39. The low-pass filter 39 smears any abrupt changes in a difference block that may be caused by jitter in the acquisition process (i.e., camera jitter). That is, each median-filtered difference block is low-pass filtered. Without this smearing, camera jitter of a scene that does not include any motion might be falsely determined to include motion. The low-pass filter 39 may be realized by a commercially available two-dimensional infinite impulse response (IIR) filter.
The output 40 of the low-pass filter 39 is connected to a histogram generator 41. The histogram generator 41 produces a normalized histogram for each of the difference blocks put out by the low-pass filter 39. Normalized histogram generators are known by those skilled in the art.
The output 42 of the histogram generator 41 is connected to a distance calculator 43. A model of gaussian noise is also provided to the distance calculator along an input 44 to the distance calculator 43. The distance calculator 43 calculates the distance between the noise model and each normalized histogram put out by the histogram generator 41. The closer a normalized histogram is to the noise model the more gaussian is the normalized histogram and the more likely that no motion has occurred between the corresponding blocks of the reference image and the test image represented by the normalized histogram, and vice versa. The distance calculator 43 may implement the Kullback-Leibler “goodness of fit” test, the chi-square test, the Kolmogorov-Smirnov test, or the Anderson-Darling test. In the preferred embodiment, the Kullback-Leibler “goodness of fit” test, which is described above, is implemented in the distance calculator. Any suitable computation device (e.g., central processing unit) that is available commercially may be programmed to compute one of the distance calculations mentioned above.
The output 45 of the distance calculator 43 is connected to a comparator 46. A user-definable threshold is applied along an input 47 to the comparator 46. Any suitable commercial comparator may be used for the comparator 46. The comparator 46 compares each distance calculation put out by the distance calculator 43 to the user-definable threshold. If an output 45 of the distance calculator 43, which is a measure of how far one difference block departs from the gaussian noise model, is equal to or greater than the user-definable threshold then the difference block represented by the output 45 is determined to be non-gaussian; otherwise the difference block is determined to be gaussian. The comparator 46 may put out a logic one level at its output 48 to indicate that a particular input to the comparator 46 is determined to be non-gaussian and put out a logic zero at its output 48 if the particular input is determined to be gaussian.
The output 48 of the comparator 46 is connected to a controller 49. The determination whether or not each difference block is gaussian is passed from the comparator 46 to the controller 49. A gaussian determination for all of the difference blocks indicates that motion has not occurred between the reference image and the test image. In the present invention, motion between the reference image and the test image may be determined if one difference block is determined to be non-gaussian or if a user-definable number of adjoining difference blocks are determined to be non-gaussian. In the preferred embodiment of the present invention, a user-definable number of adjoining difference blocks must be determined to be non-gaussian before motion is determined to have occurred between the reference image and the test image. An input 50 to the controller 49 allows the user to indicate how many adjoining difference blocks (e.g., N) must be determined to be non-gaussian before motion is determined.
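A hedged sketch of the controller's decision rule, assuming "adjoining" means consecutive flagged blocks along one scan of the block grid (the patent does not spell out the adjacency used); `flags[i]` is True when block i's distance calculation met or exceeded the user-definable threshold:

```python
def motion_detected(flags, n_required):
    """Return True if at least `n_required` adjoining (consecutive)
    flagged difference blocks are found in `flags`, else False."""
    run = 0
    for flagged in flags:
        run = run + 1 if flagged else 0  # extend or reset the run
        if run >= n_required:
            return True
    return False
```

Setting `n_required` above 1 suppresses isolated blocks that differ from the noise model, which is the rationale given for requiring several adjoining blocks before declaring motion.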
The output 51 of the controller 49 indicates whether or not motion has occurred between the reference image and the test image. For example, a logic one output may indicate motion and a logic zero output may indicate no motion.
If the scene is to be continuously monitored, the output 51 of the controller 49 is used to determine which images should be used for the next reference image and the next test image. The output 51 is connected to the image grabber 21 in order to control which images are used for the next reference image and the next test image. If the output 51 of the controller 49 indicates that no motion has occurred between the original reference image and the original test image then the present reference image will be used as the next reference image and a new test image will be acquired by the image grabber 21. The new test image will then be compared to the original reference image, as described above, to determine whether or not motion has occurred between these images.
If motion has occurred between the original reference image and the original test image then the original test image is used as the next reference image and a new test image is acquired by the image grabber 21. The new test image will then be compared to the new reference image, as described above, to determine whether or not motion has occurred between these images.
This process may be repeated for as long as the user wishes to monitor the scene.
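The reference-update policy described above can be sketched as a loop; `grab_image` and `detect_motion` below are hypothetical stand-ins for the image grabber 21 and the processing chain from aligner through controller:

```python
def monitor(grab_image, detect_motion, num_frames):
    """Run the update policy: keep the current reference while no motion
    is seen; promote the test image to reference when motion is detected.
    Returns the per-frame motion decisions."""
    reference = grab_image()
    events = []
    for _ in range(num_frames):
        test = grab_image()
        moved = detect_motion(reference, test)
        events.append(moved)
        if moved:
            reference = test  # motion: test image becomes next reference
        # no motion: the present reference image is reused
    return events
```

Usage with toy scalar "images" and a threshold detector:

```python
frames = iter([0, 0, 5, 5, 9])
events = monitor(lambda: next(frames), lambda r, t: abs(r - t) > 2, 4)
```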

Claims (14)

What is claimed is:
1. A method of detecting motion between a reference image and a test image, comprising the steps of:
a) acquiring the reference image;
b) acquiring the test image;
c) aligning the reference image and the test image;
d) dividing the reference image and the test image each into N×M blocks, where N and M are user-definable integers;
e) masking a user-definable number of blocks of the reference image and the test image;
f) differencing each pair of corresponding blocks in the reference image and the test image;
g) median filtering each result of the last step;
h) low-pass filtering each result of the last step;
i) generating a normalized histogram for each result of the last step;
j) generating a model of gaussian noise;
k) calculating a distance between the model of gaussian noise and each result of step (i);
l) comparing the results of the last step to a user-definable threshold; and
m) determining that no motion has occurred between the reference image and the test image if a user-definable number of results of the last step were below the user-definable threshold, otherwise determining that motion has occurred between the reference image and the test image.
2. The method of claim 1, further including the step of discarding the test image, acquiring a new test image, and returning to step (c) if no motion has been determined in step (m), otherwise, replacing the reference image with the test image, acquiring a new test image, and returning to step (c).
3. The method of claim 1, wherein said step of aligning the reference image and the test image is comprised of using an alignment technique selected from the group consisting of horizontal alignment, vertical alignment, and horizontal and vertical alignment.
4. The method of claim 1, wherein said step of generating a model of gaussian noise comprises generating a model of gaussian noise for noise commonly encountered in sonograms.
5. The method of claim 1, wherein said step of generating a model of gaussian noise comprises generating a model of gaussian noise for noise commonly encountered in radar images.
6. The method of claim 1, wherein said step of calculating a distance comprises calculating a distance using a distance calculation selected from the group consisting of a Kullback-Leibler “goodness of fit” test, a chi-square test, a Kolmogorov-Smirnov test, and an Anderson-Darling test.
7. The method of claim 2, wherein said step of aligning the reference image and the test image is comprised of using an alignment technique selected from the group consisting of horizontal alignment, vertical alignment, and horizontal and vertical alignment.
8. The method of claim 7, wherein said step of generating a model of gaussian noise comprises generating a model of gaussian noise for noise commonly encountered in sonograms.
9. The method of claim 8, wherein said step of generating a model of gaussian noise comprises generating a model of gaussian noise for noise commonly encountered in radar images.
10. The method of claim 9, wherein said step of calculating a distance comprises calculating a distance using a distance calculation selected from the group consisting of a Kullback-Leibler “goodness of fit” test, a chi-square test, a Kolmogorov-Smirnov test, and an Anderson-Darling test.
11. A device for detecting motion between a reference image and a test image, comprising:
a) an image grabber, having a first input for acquiring the reference image and the test image, having a second input, having a first output for transmitting the reference image, and having a second output for transmitting the test image;
b) an image aligner, having a first input connected to the first output of said image grabber, having a second input connected to the second output of said image grabber, having a first output for transmitting the aligned reference image, and having a second output for transmitting the aligned test image;
c) a block divider, having a first input connected to the first output of said image aligner, having a second input connected to the second output of said image aligner, having a first output for transmitting a divided reference image, and having a second output for transmitting a divided test image;
d) a masker, having a first input connected to the first output of said block divider, having a second input connected to the second output of said block divider, having a third input for receiving a control signal that indicates which blocks should be masked, having a first output for transmitting un-masked blocks of the reference image, and having a second output for transmitting un-masked blocks of the test image;
e) a differencer, having a first input connected to the first output of said masker, having a second input connected to the second output of said masker, and having an output for transmitting blocks that represent the differences between the blocks of the reference image and corresponding blocks of the test image;
f) a median filter, having an input connected to the output of said differencer, and having an output;
g) a low-pass filter, having an input connected to the output of said median filter, and having an output;
h) a histogram generator, having an input connected to the output of said low-pass filter, and having an output;
i) a distance calculator, having a first input connected to the output of said histogram generator, having a second input for receiving a model of gaussian noise, and having an output;
j) a comparator, having a first input connected to the output of said distance calculator, having a second input for receiving a user-definable threshold, and having an output; and
k) a controller, having a first input connected to the output of said comparator, having a second input for receiving a user-definable number, and having an output connected to the second input of said image grabber.
12. The device of claim 11, wherein said low-pass filter is comprised of a two-dimensional infinite impulse response filter.
13. The device of claim 11, wherein said distance calculator implements distance calculations selected from the group consisting of a Kullback-Leibler “goodness of fit” test, a chi-square test, a Kolmogorov-Smirnov test, and an Anderson-Darling test.
14. The device of claim 12, wherein said distance calculator implements distance calculations selected from the group consisting of a Kullback-Leibler “goodness of fit” test, a chi-square test, and an Anderson-Darling test.
US09/081,608 1998-05-20 1998-05-20 Device for and method of detecting motion in an image Expired - Fee Related US6298144B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/081,608 US6298144B1 (en) 1998-05-20 1998-05-20 Device for and method of detecting motion in an image


Publications (1)

Publication Number Publication Date
US6298144B1 true US6298144B1 (en) 2001-10-02

Family

ID=22165222



Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020009210A1 (en) * 2000-05-19 2002-01-24 Piotr Wilinski Method, system and apparatus
US20020039135A1 (en) * 1999-12-23 2002-04-04 Anders Heyden Multiple backgrounds
US6430304B2 (en) * 1998-08-28 2002-08-06 Sarnoff Corporation Method and apparatus for processing images to compute image flow information
US20020136540A1 (en) * 1997-10-06 2002-09-26 Dvdo, Inc. Digital video system and methods for providing same
US20020163595A1 (en) * 1999-08-11 2002-11-07 Adams Dale R. Interlace motion artifact detection using vertical frequency detection and analysis
US20020174223A1 (en) * 1999-10-27 2002-11-21 Netbotz Inc. Method and apparatus for replay of historical data
US6496592B1 (en) * 1998-07-13 2002-12-17 Oerlikon Contraves Ag Method for tracking moving object by means of specific characteristics
US20020190229A1 (en) * 2001-06-18 2002-12-19 Casio Computer Co., Ltd. Photosensor system and drive control method thereof
US20030052996A1 (en) * 1998-09-15 2003-03-20 Dvdo, Inc. Method and apparatus for detecting and smoothing diagonal features in video images
US20030095189A1 (en) * 2001-11-13 2003-05-22 Xinqiao Liu Motion/saturation detection system and method for synthesizing high dynamic range motion blur free images from multiple captures
US20030098919A1 (en) * 2001-11-13 2003-05-29 Xinqiao Liu Photocurrent estimation from multiple captures for simultaneous SNR and dynamic range improvement in CMOS image sensors
US6650273B1 (en) * 2002-05-06 2003-11-18 Lockheed Martin Corporation Change subtraction of synthetic aperture radar data
US20040021775A1 (en) * 2001-06-05 2004-02-05 Tetsujiro Kondo Image processing device
US6690769B2 (en) * 2001-02-24 2004-02-10 Telesector Resource Group, Inc. Hand-held telecommunication loop testing device
US20040056978A1 (en) * 1998-08-11 2004-03-25 Dvdo, Inc. Method and apparatus for deinterlacing digital video images
US20040062439A1 (en) * 2002-09-27 2004-04-01 Eastman Kodak Company Method and system for generating a foreground mask for a composite image
US6786730B2 (en) 2002-03-01 2004-09-07 Accelerized Golf Llc Ergonomic motion and athletic activity monitoring and training system and method
US20040175057A1 (en) * 2003-03-04 2004-09-09 Thomas Tsao Affine transformation analysis system and method for image matching
US6809758B1 (en) * 1999-12-29 2004-10-26 Eastman Kodak Company Automated stabilization method for digital image sequences
US20040246336A1 (en) * 2003-06-04 2004-12-09 Model Software Corporation Video surveillance system
US20040252763A1 (en) * 2001-11-07 2004-12-16 Mertens Mark Jozef Willem Occlusion detector for and method of detecting occlusion areas
US6867814B2 (en) 2000-04-18 2005-03-15 Silicon Image, Inc. Method, system and article of manufacture for identifying the source type and quality level of a video sequence
US20050062892A1 (en) * 1998-07-23 2005-03-24 Dvdo, Inc. Method and apparatus for reducing on-chip memory in vertical video processing
USRE38908E1 (en) * 1999-03-05 2005-12-06 Biscom, Inc. Security system for notification of an undesired condition at a monitored area with minimized false alarms
US20060109903A1 (en) * 2004-03-15 2006-05-25 James Bergen Method and apparatus for providing noise reduction
US20070013777A1 (en) * 2005-07-15 2007-01-18 Sony Corporation Imaging device and imaging method
US20070064798A1 (en) * 2005-09-16 2007-03-22 Sony Corporation Classified filtering for temporal prediction
US20070085678A1 (en) * 2003-04-14 2007-04-19 Ryan Joy Environmental monitoring device
US20070211167A1 (en) * 1998-10-05 2007-09-13 Adams Dale R Digital video system and methods for providing same
US20070223596A1 (en) * 2006-03-24 2007-09-27 Zhe Fan Spurious Motion Filter
US20080056399A1 (en) * 1998-08-10 2008-03-06 Kamilo Feher GMSK and OFDM nonlinearly and linearly amplified cellular and wireless networks
US20080205525A1 (en) * 2005-05-31 2008-08-28 Koninklijke Philips Electronics, N.V. Calculating Transformation Parameters For Image Processing
US20080273815A1 (en) * 2007-05-04 2008-11-06 Thomson Licensing Method and device for retrieving a test block from a blockwise stored reference image
US20080292185A1 (en) * 2007-05-24 2008-11-27 Seiji Kimura Video signal processing method, program for the video signal processing method, recording medium recording the program for the video signal processing method, and video signal processing apparatus
US20080297474A1 (en) * 2004-08-17 2008-12-04 Mikko Blomqvist Electronic Device and a Method for Controlling the Functions of the Electronic Device as Well as Program Product for Implementing the Method
US20080310731A1 (en) * 2007-06-18 2008-12-18 Zeitera, Llc Methods and Apparatus for Providing a Scalable Identification of Digital Video Sequences
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
US20090080523A1 (en) * 2007-09-24 2009-03-26 Microsoft Corporation Remote user interface updates using difference and motion encoding
US20090100483A1 (en) * 2007-10-13 2009-04-16 Microsoft Corporation Common key frame caching for a remote user interface
US20090100125A1 (en) * 2007-10-11 2009-04-16 Microsoft Corporation Optimized key frame caching for remote interface rendering
US20090097751A1 (en) * 2007-10-12 2009-04-16 Microsoft Corporation Remote user interface raster segment motion detection and encoding
US7639841B2 (en) * 2004-12-20 2009-12-29 Siemens Corporation System and method for on-road detection of a vehicle using knowledge fusion
US20090322882A1 (en) * 2008-06-27 2009-12-31 Sony Corporation Image processing apparatus, image apparatus, image processing method, and program
US20100091126A1 (en) * 2008-10-14 2010-04-15 Sony Corporation Method and unit for motion detection based on a difference histogram
US20100157078A1 (en) * 2008-12-19 2010-06-24 Qualcomm Incorporated High dynamic range image combining
US20100157079A1 (en) * 2008-12-19 2010-06-24 Qualcomm Incorporated System and method to selectively combine images
US7779026B2 (en) 2002-05-03 2010-08-17 American Power Conversion Corporation Method and apparatus for collecting and displaying network device information
US20100214447A1 (en) * 2009-02-20 2010-08-26 Kei Itoh Imaging device and image method
US20100214425A1 (en) * 2007-03-26 2010-08-26 Pelco, Inc. Method of improving the video images from a video camera
US20100245600A1 (en) * 2009-03-30 2010-09-30 Chang Soon-Keun Digital photographing device, method of controlling the digital photographing device, and computer-readable storage medium
US20100271498A1 (en) * 2009-04-22 2010-10-28 Qualcomm Incorporated System and method to selectively combine video frame image data
US20110122293A1 (en) * 2008-01-07 2011-05-26 Akira Yamada Image processing unit
US8005944B2 (en) 1999-10-27 2011-08-23 American Power Conversion Corporation Method and system for monitoring computer networks and equipment
US8015255B2 (en) 2003-10-27 2011-09-06 American Power Conversion Corporation System and method for network device communication
US20110243419A1 (en) * 2010-03-30 2011-10-06 Siemens Aktiengesellschaft Multisegment Picture Reconstruction For Cardio CT Pictures
US8271626B2 (en) 2001-01-26 2012-09-18 American Power Conversion Corporation Methods for displaying physical network topology and environmental status by location, organization, or responsible party
US8566292B2 (en) 2003-04-14 2013-10-22 Schneider Electric It Corporation Method and system for journaling and accessing sensor and configuration data
JPWO2012093663A1 (en) * 2011-01-06 2014-06-09 株式会社ニコン Image processing apparatus, imaging apparatus, and image processing program
WO2014182156A1 (en) * 2013-05-08 2014-11-13 Mimos Berhad A system and method for detecting an object
US8990536B2 (en) 2011-06-01 2015-03-24 Schneider Electric It Corporation Systems and methods for journaling and executing device control instructions
US9952103B2 (en) 2011-12-22 2018-04-24 Schneider Electric It Corporation Analysis of effect of transient events on temperature in a data center
CN110363197A (en) * 2019-06-22 2019-10-22 东北电力大学 Based on the video area-of-interest exacting method for improving visual background extraction model
US11076507B2 (en) 2007-05-15 2021-07-27 Schneider Electric It Corporation Methods and systems for managing facility power and cooling

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3641257A (en) 1970-01-16 1972-02-08 Hughes Aircraft Co Noise suppressor for surveillance and intrusion-detecting system
US4811090A (en) * 1988-01-04 1989-03-07 Hypervision Image emission microscope with improved image processing capability
US4894716A (en) 1989-04-20 1990-01-16 Burle Technologies, Inc. T.V. motion detector with false alarm immunity
US4918633A (en) * 1985-11-25 1990-04-17 Eastman Kodak Company Digital image noise reduction method and transmission system
US5111511A (en) 1988-06-24 1992-05-05 Matsushita Electric Industrial Co., Ltd. Image motion vector detecting apparatus
US5157732A (en) 1989-03-20 1992-10-20 Matsushita Electric Industrial Co., Ltd. Motion vector detector employing image subregions and median values
US5365603A (en) 1990-07-27 1994-11-15 Siemens Aktiengesellschaft Method for analyzing movements in temporal sequences of digital images
US5504473A (en) * 1993-07-22 1996-04-02 Digital Security Controls Ltd. Method of analyzing signal quality
US5535302A (en) 1994-12-01 1996-07-09 Tsao; Tien-Ren Method and apparatus for determining image affine flow using artificial neural system with simple cells and lie germs
US5543858A (en) 1993-06-11 1996-08-06 U. S. Philips Corporation Method of and apparatus for reducing noise in a motion detector signal
US5548659A (en) 1991-12-27 1996-08-20 Kabushiki Kaisha Toshiba Method and apparatus for detecting changes in dynamic images
US5586202A (en) 1991-01-31 1996-12-17 Sony Corporation Motion detecting apparatus
US5588067A (en) 1993-02-19 1996-12-24 Peterson; Fred M. Motion detection and image acquisition apparatus and method of detecting the motion of and acquiring an image of an object
US5589884A (en) 1993-10-01 1996-12-31 Toko Kabushiki Kaisha Adaptive quantization controlled by scene change detection
US5600731A (en) 1991-05-09 1997-02-04 Eastman Kodak Company Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation
US5627586A (en) 1992-04-09 1997-05-06 Olympus Optical Co., Ltd. Moving body detection device of camera
US5644386A (en) * 1995-01-11 1997-07-01 Loral Vought Systems Corp. Visual recognition system for LADAR sensors
US6084981A (en) * 1994-11-29 2000-07-04 Hitachi Medical Corporation Image processing apparatus for performing image converting process by neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Rogers, L.S. et al., "SAR Image Despeckling Using the Multiscale Edge Representation", IEEE International Geoscience and Remote Sensing Symposium, Jul. 1998.*
Thomas Komarek et al., "Array Architecture for Block Matching Algorithms", IEEE Transactions on Circuits and Systems, vol. 36, No. 10, Oct. 1989.
W.B. Baringer et al., "High Speed High Accuracy Motion Detection and Tracking on the Parallel Pipelined Projection Engine," Sixth MDSP Workshop, 1989, p. 70, Session TAI, IEEE, N.Y., N.Y., USA.

Cited By (124)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7215376B2 (en) 1997-10-06 2007-05-08 Silicon Image, Inc. Digital video system and methods for providing same
US20020136540A1 (en) * 1997-10-06 2002-09-26 Dvdo, Inc. Digital video system and methods for providing same
US6496592B1 (en) * 1998-07-13 2002-12-17 Oerlikon Contraves Ag Method for tracking moving object by means of specific characteristics
US20050062892A1 (en) * 1998-07-23 2005-03-24 Dvdo, Inc. Method and apparatus for reducing on-chip memory in vertical video processing
US20080056399A1 (en) * 1998-08-10 2008-03-06 Kamilo Feher GMSK and OFDM nonlinearly and linearly amplified cellular and wireless networks
US20060262217A1 (en) * 1998-08-11 2006-11-23 Dvdo, Inc. Method and apparatus for detecting frequency in digital video images
US7499103B2 (en) * 1998-08-11 2009-03-03 Silicon Image, Inc. Method and apparatus for detecting frequency in digital video images
US7027099B2 (en) 1998-08-11 2006-04-11 Silicon Image Method and apparatus for deinterlacing digital video images
US20040056978A1 (en) * 1998-08-11 2004-03-25 Dvdo, Inc. Method and apparatus for deinterlacing digital video images
US6430304B2 (en) * 1998-08-28 2002-08-06 Sarnoff Corporation Method and apparatus for processing images to compute image flow information
US20030052996A1 (en) * 1998-09-15 2003-03-20 Dvdo, Inc. Method and apparatus for detecting and smoothing diagonal features in video images
US6829013B2 (en) 1998-09-15 2004-12-07 Silicon Image, Inc. Method and apparatus for detecting and smoothing diagonal features in video images
US20070211167A1 (en) * 1998-10-05 2007-09-13 Adams Dale R Digital video system and methods for providing same
USRE38908E1 (en) * 1999-03-05 2005-12-06 Biscom, Inc. Security system for notification of an undesired condition at a monitored area with minimized false alarms
US7633559B2 (en) 1999-08-11 2009-12-15 Silicon Image, Inc. Interlace motion artifact detection using vertical frequency detection and analysis
US6909469B2 (en) * 1999-08-11 2005-06-21 Silicon Image, Inc. Interlace motion artifact detection using vertical frequency detection and analysis
US20020163595A1 (en) * 1999-08-11 2002-11-07 Adams Dale R. Interlace motion artifact detection using vertical frequency detection and analysis
US20080122974A1 (en) * 1999-08-11 2008-05-29 Adams Dale R Interlace Motion Artifact Detection Using Vertical Frequency Detection And Analysis
US7391481B2 (en) 1999-08-11 2008-06-24 Silicon Image, Inc. Interlace motion artifact detection using vertical frequency detection and analysis
US8024451B2 (en) 1999-10-27 2011-09-20 American Power Conversion Corporation Method and system for monitoring computer networks and equipment
US8005944B2 (en) 1999-10-27 2011-08-23 American Power Conversion Corporation Method and system for monitoring computer networks and equipment
US20020174223A1 (en) * 1999-10-27 2002-11-21 Netbotz Inc. Method and apparatus for replay of historical data
US8090817B2 (en) 1999-10-27 2012-01-03 American Power Conversion Corporation Method and system for monitoring computer networks and equipment
US8224953B2 (en) 1999-10-27 2012-07-17 American Power Conversion Corporation Method and apparatus for replay of historical data
US6819353B2 (en) * 1999-12-23 2004-11-16 Wespot Ab Multiple backgrounds
US20020039135A1 (en) * 1999-12-23 2002-04-04 Anders Heyden Multiple backgrounds
US6809758B1 (en) * 1999-12-29 2004-10-26 Eastman Kodak Company Automated stabilization method for digital image sequences
US6867814B2 (en) 2000-04-18 2005-03-15 Silicon Image, Inc. Method, system and article of manufacture for identifying the source type and quality level of a video sequence
US6810134B2 (en) * 2000-05-19 2004-10-26 Koninklijke Philips Electronics N.V. Method, system and apparatus for choosing an optimal candidate value for block matching
US20020009210A1 (en) * 2000-05-19 2002-01-24 Piotr Wilinski Method, system and apparatus
US8966044B2 (en) 2001-01-26 2015-02-24 Schneider Electric It Corporation Methods for displaying physical network topology and environmental status by location, organization, or responsible party
US8271626B2 (en) 2001-01-26 2012-09-18 American Power Conversion Corporation Methods for displaying physical network topology and environmental status by location, organization, or responsible party
US6690769B2 (en) * 2001-02-24 2004-02-10 Telesector Resource Group, Inc. Hand-held telecommunication loop testing device
US7139019B2 (en) * 2001-06-05 2006-11-21 Sony Corporation Image processing device
US20040021775A1 (en) * 2001-06-05 2004-02-05 Tetsujiro Kondo Image processing device
US7428010B2 (en) 2001-06-18 2008-09-23 Casio Computer Co., Ltd. Photosensor system and drive control method for optimal sensitivity
US20070206113A1 (en) * 2001-06-18 2007-09-06 Casio Computer Co., Ltd. Photosensor system and drive control method for optimal sensitivity
US7268807B2 (en) * 2001-06-18 2007-09-11 Casio Computer Co., Ltd. Photosensor system and drive control method for optimal sensitivity
US20020190229A1 (en) * 2001-06-18 2002-12-19 Casio Computer Co., Ltd. Photosensor system and drive control method thereof
US20040252763A1 (en) * 2001-11-07 2004-12-16 Mertens Mark Jozef Willem Occlusion detector for and method of detecting occlusion areas
US7995793B2 (en) * 2001-11-07 2011-08-09 Trident Microsystems (Far East) Ltd. Occlusion detector for and method of detecting occlusion areas
US20030095189A1 (en) * 2001-11-13 2003-05-22 Xinqiao Liu Motion/saturation detection system and method for synthesizing high dynamic range motion blur free images from multiple captures
US20030098919A1 (en) * 2001-11-13 2003-05-29 Xinqiao Liu Photocurrent estimation from multiple captures for simultaneous SNR and dynamic range improvement in CMOS image sensors
US7061524B2 (en) * 2001-11-13 2006-06-13 The Board Of Trustees Of The Leland Stanford Junior University Motion/saturation detection system and method for synthesizing high dynamic range motion blur free images from multiple captures
US7009636B2 (en) * 2001-11-13 2006-03-07 The Board Of Trustees Of The Leland Stanford Junior University Photocurrent estimation from multiple captures for simultaneous SNR and dynamic range improvement in CMOS image sensors
US6786730B2 (en) 2002-03-01 2004-09-07 Accelerized Golf Llc Ergonomic motion and athletic activity monitoring and training system and method
US7779026B2 (en) 2002-05-03 2010-08-17 American Power Conversion Corporation Method and apparatus for collecting and displaying network device information
US8719319B2 (en) 2002-05-03 2014-05-06 Schneider Electric It Corporation Method and apparatus for collecting and displaying network device information
US7958170B2 (en) 2002-05-03 2011-06-07 American Power Conversion Corporation Method and apparatus for collecting and displaying data associated with network devices
US8019798B2 (en) 2002-05-03 2011-09-13 American Power Conversion Corporation Method and apparatus for collecting and displaying network device information
US6650273B1 (en) * 2002-05-06 2003-11-18 Lockheed Martin Corporation Change subtraction of synthetic aperture radar data
US7024054B2 (en) * 2002-09-27 2006-04-04 Eastman Kodak Company Method and system for generating a foreground mask for a composite image
US20040062439A1 (en) * 2002-09-27 2004-04-01 Eastman Kodak Company Method and system for generating a foreground mask for a composite image
US20040175057A1 (en) * 2003-03-04 2004-09-09 Thomas Tsao Affine transformation analysis system and method for image matching
US7986224B2 (en) 2003-04-14 2011-07-26 American Power Conversion Corporation Environmental monitoring device
US8566292B2 (en) 2003-04-14 2013-10-22 Schneider Electric It Corporation Method and system for journaling and accessing sensor and configuration data
US20070085678A1 (en) * 2003-04-14 2007-04-19 Ryan Joy Environmental monitoring device
US8605155B2 (en) 2003-06-04 2013-12-10 Model Software Corporation Video surveillance system
US7859564B2 (en) 2003-06-04 2010-12-28 Model Software Corporation Video surveillance system
US20040246336A1 (en) * 2003-06-04 2004-12-09 Model Software Corporation Video surveillance system
US8015255B2 (en) 2003-10-27 2011-09-06 American Power Conversion Corporation System and method for network device communication
US7702178B2 (en) * 2004-03-15 2010-04-20 Sarnoff Corporation Method and apparatus for providing noise reduction
US20060109903A1 (en) * 2004-03-15 2006-05-25 James Bergen Method and apparatus for providing noise reduction
US10126834B2 (en) * 2004-08-17 2018-11-13 Conversant Wireless Licensing S.a.r.l. Electronic device and a method for controlling the functions of the electronic device as well as program product for implementing the method
US8970490B2 (en) * 2004-08-17 2015-03-03 Core Wireless Licensing S.A.R.L. Electronic device and a method for controlling the functions of the electronic device as well as program product for implementing the method
US20150138090A1 (en) * 2004-08-17 2015-05-21 Core Wireless Licensing S.A.R.L. Electronic device and a method for controlling the functions of the electronic device as well as program product for implementing the method
US20080297474A1 (en) * 2004-08-17 2008-12-04 Mikko Blomqvist Electronic Device and a Method for Controlling the Functions of the Electronic Device as Well as Program Product for Implementing the Method
US7639841B2 (en) * 2004-12-20 2009-12-29 Siemens Corporation System and method for on-road detection of a vehicle using knowledge fusion
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
CN101189878B (en) * 2005-05-31 2010-10-27 Trident Microsystems (Far East) Ltd. Calculating transformation parameters for image processing
US8442118B2 (en) 2005-05-31 2013-05-14 Entropic Communications, Inc. Calculating transformation parameters for image processing
US20080205525A1 (en) * 2005-05-31 2008-08-28 Koninklijke Philips Electronics, N.V. Calculating Transformation Parameters For Image Processing
US7889241B2 (en) * 2005-07-15 2011-02-15 Sony Corporation Imaging device and imaging method
US8081233B2 (en) 2005-07-15 2011-12-20 Sony Corporation Imaging device and imaging method
US20100265361A1 (en) * 2005-07-15 2010-10-21 Sony Corporation Imaging device and imaging method
US20070013777A1 (en) * 2005-07-15 2007-01-18 Sony Corporation Imaging device and imaging method
US7894522B2 (en) * 2005-09-16 2011-02-22 Sony Corporation Classified filtering for temporal prediction
US20070064798A1 (en) * 2005-09-16 2007-03-22 Sony Corporation Classified filtering for temporal prediction
US8125522B2 (en) * 2006-03-24 2012-02-28 Siemens Industry, Inc. Spurious motion filter
WO2008115184A1 (en) * 2006-03-24 2008-09-25 Siemens Building Technologies, Inc. Spurious motion filter
CN101411190B (en) * 2006-03-24 2013-01-02 西门子工业公司 Spurious motion filter
US20070223596A1 (en) * 2006-03-24 2007-09-27 Zhe Fan Spurious Motion Filter
US20100214425A1 (en) * 2007-03-26 2010-08-26 Pelco, Inc. Method of improving the video images from a video camera
US8269844B2 (en) * 2007-03-26 2012-09-18 Pelco, Inc. Method of improving video images by removing the effects of camera vibration
US20080273815A1 (en) * 2007-05-04 2008-11-06 Thomson Licensing Method and device for retrieving a test block from a blockwise stored reference image
US11503744B2 (en) 2007-05-15 2022-11-15 Schneider Electric It Corporation Methods and systems for managing facility power and cooling
US11076507B2 (en) 2007-05-15 2021-07-27 Schneider Electric It Corporation Methods and systems for managing facility power and cooling
US20080292185A1 (en) * 2007-05-24 2008-11-27 Seiji Kimura Video signal processing method, program for the video signal processing method, recording medium recording the program for the video signal processing method, and video signal processing apparatus
US8126266B2 (en) * 2007-05-24 2012-02-28 Sony Corporation Video signal processing method, program for the video signal processing method, recording medium recording the program for the video signal processing method, and video signal processing apparatus
US8229227B2 (en) * 2007-06-18 2012-07-24 Zeitera, Llc Methods and apparatus for providing a scalable identification of digital video sequences
US20080310731A1 (en) * 2007-06-18 2008-12-18 Zeitera, Llc Methods and Apparatus for Providing a Scalable Identification of Digital Video Sequences
US8127233B2 (en) 2007-09-24 2012-02-28 Microsoft Corporation Remote user interface updates using difference and motion encoding
US20090080523A1 (en) * 2007-09-24 2009-03-26 Microsoft Corporation Remote user interface updates using difference and motion encoding
US20090100125A1 (en) * 2007-10-11 2009-04-16 Microsoft Corporation Optimized key frame caching for remote interface rendering
US8619877B2 (en) 2007-10-11 2013-12-31 Microsoft Corporation Optimized key frame caching for remote interface rendering
US8358879B2 (en) 2007-10-12 2013-01-22 Microsoft Corporation Remote user interface raster segment motion detection and encoding
US20090097751A1 (en) * 2007-10-12 2009-04-16 Microsoft Corporation Remote user interface raster segment motion detection and encoding
US8121423B2 (en) 2007-10-12 2012-02-21 Microsoft Corporation Remote user interface raster segment motion detection and encoding
US20090100483A1 (en) * 2007-10-13 2009-04-16 Microsoft Corporation Common key frame caching for a remote user interface
US8106909B2 (en) 2007-10-13 2012-01-31 Microsoft Corporation Common key frame caching for a remote user interface
US20110122293A1 (en) * 2008-01-07 2011-05-26 Akira Yamada Image processing unit
US9342896B2 (en) * 2008-06-27 2016-05-17 Sony Corporation Image processing apparatus, image apparatus, image processing method, and program for analyzing an input image of a camera
US20090322882A1 (en) * 2008-06-27 2009-12-31 Sony Corporation Image processing apparatus, image apparatus, image processing method, and program
US8208030B2 (en) * 2008-10-14 2012-06-26 Sony Corporation Method and unit for motion detection based on a difference histogram
US20100091126A1 (en) * 2008-10-14 2010-04-15 Sony Corporation Method and unit for motion detection based on a difference histogram
US20100157078A1 (en) * 2008-12-19 2010-06-24 Qualcomm Incorporated High dynamic range image combining
US8797421B2 (en) 2008-12-19 2014-08-05 Qualcomm Incorporated System and method to selectively combine images
US8339475B2 (en) 2008-12-19 2012-12-25 Qualcomm Incorporated High dynamic range image combining
US20100157079A1 (en) * 2008-12-19 2010-06-24 Qualcomm Incorporated System and method to selectively combine images
US8390719B2 (en) * 2009-02-20 2013-03-05 Ricoh Company, Ltd. Imaging device and image method
US20100214447A1 (en) * 2009-02-20 2010-08-26 Kei Itoh Imaging device and image method
KR20100108758A (en) * 2009-03-30 2010-10-08 삼성전자주식회사 A digital picturing device, a method for controlling the digital picturing device, and a computer-readable storage medium
US20100245600A1 (en) * 2009-03-30 2010-09-30 Chang Soon-Keun Digital photographing device, method of controlling the digital photographing device, and computer-readable storage medium
US8643728B2 (en) * 2009-03-30 2014-02-04 Samsung Electronics Co., Ltd. Digital photographing device, method of controlling the digital photographing device, and computer-readable storage medium for determining photographing settings based on image object motion
US20100271498A1 (en) * 2009-04-22 2010-10-28 Qualcomm Incorporated System and method to selectively combine video frame image data
US8111300B2 (en) 2009-04-22 2012-02-07 Qualcomm Incorporated System and method to selectively combine video frame image data
US8897525B2 (en) * 2010-03-30 2014-11-25 Siemens Aktiengesellschaft Multisegment picture reconstruction for cardio CT pictures
US20110243419A1 (en) * 2010-03-30 2011-10-06 Siemens Aktiengesellschaft Multisegment Picture Reconstruction For Cardio CT Pictures
JPWO2012093663A1 (en) * 2011-01-06 2014-06-09 Nikon Corporation Image processing apparatus, imaging apparatus, and image processing program
JP5949559B2 (en) * 2011-01-06 2016-07-06 Nikon Corporation Image processing apparatus, imaging apparatus, and image processing program
US8990536B2 (en) 2011-06-01 2015-03-24 Schneider Electric It Corporation Systems and methods for journaling and executing device control instructions
US9952103B2 (en) 2011-12-22 2018-04-24 Schneider Electric It Corporation Analysis of effect of transient events on temperature in a data center
WO2014182156A1 (en) * 2013-05-08 2014-11-13 Mimos Berhad A system and method for detecting an object
CN110363197A (en) * 2019-06-22 2019-10-22 Northeast Electric Power University Video region-of-interest extraction method based on an improved visual background extraction model

Similar Documents

Publication Publication Date Title
US6298144B1 (en) Device for and method of detecting motion in an image
US6628805B1 (en) Apparatus and a method for detecting motion within an image sequence
US5369599A (en) Signal metric estimator
Aach et al. Statistical model-based change detection in moving video
US5719954A (en) Stereo matching method and disparity measuring method
KR101271092B1 (en) Method and apparatus of real-time segmentation for motion detection in surveillance camera system
Hue et al. A particle filter to track multiple objects
KR100224752B1 (en) Target tracking method and apparatus
EP3054686B1 (en) Hierarchical motion estimation and video segmentation in presence of more than one moving object in a block
US20010036313A1 (en) Process for detecting a change of shot in a succession of video images
US20060245500A1 (en) Tunable wavelet target extraction preprocessor system
Hung et al. Speed up temporal median filter for background subtraction
Walha et al. Video stabilization for aerial video surveillance
Hung et al. A fast algorithm of temporal median filter for background subtraction.
Coimbra et al. Approximating optical flow within the MPEG-2 compressed domain
Smith et al. An improved power cepstrum based stereo correspondence method for textured scenes
Tzannes et al. Detection of point targets in image sequences by hypothesis testing: a temporal test first approach
JP2961272B1 (en) Object recognition apparatus and method using feature vector
Wong et al. Shared-use motion vector algorithm for moving objects detection for automobiles
Martinez et al. Implicit motion compensated noise reduction of motion video scenes
KR100213109B1 (en) Circuit for improving picture quality by using noise reduction and histogram equalization and method thereof
Fonseca et al. Design and implementation of an optical flow-based autonomous video surveillance system
EP0724728B1 (en) Signal metric estimator
EP1251689B1 (en) Process for detecting a change of shot in a succession of video images
Pan et al. A Kalman filter in motion analysis from stereo image sequences

Legal Events

Date Code Title Description
AS Assignment. Owner name: NATIONAL SECURITY AGENCY, GOVERNMENT OF THE UNITED STATES OF AMERICA AS REPRESENTED BY, THE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PUCKER, LEONARD G., II;SOFGE, DAVID B.;REEL/FRAME:009212/0573. Effective date: 19980519
FPAY Fee payment. Year of fee payment: 4
REMI Maintenance fee reminder mailed
REIN Reinstatement after maintenance fee payment confirmed
LAPS Lapse for failure to pay maintenance fees. Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FP Lapsed due to failure to pay maintenance fee. Effective date: 20091002
FEPP Fee payment procedure. Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FEPP Fee payment procedure. Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FPAY Fee payment. Year of fee payment: 8
SULP Surcharge for late payment
PRDP Patent reinstated due to the acceptance of a late maintenance fee. Effective date: 20100723
REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation. Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP Lapsed due to failure to pay maintenance fee. Effective date: 20131002