|Publication number||USRE38420 E1|
|Application number||US 09/047,892|
|Publication date||Feb 10, 2004|
|Filing date||Aug 12, 1993|
|Priority date||Aug 12, 1992|
|Inventors||Graham Alexander Thomas|
|Original Assignee||British Broadcasting Corporation|
This invention relates to the derivation of information regarding the position of a television camera from image data acquired by the camera.
In television production, it is often necessary to shoot live action in the studio and electronically superimpose the action on a background image. This is usually done by shooting the action in front of a blue background and generating a ‘key’ from the video signal to distinguish between foreground and background. In the background areas, the chosen background image can be electronically inserted.
One limitation to this technique is that the camera in the studio cannot move, since this would generate motion of the foreground without commensurate background movement. One way of allowing the camera to move is to use a robotic camera mounting that allows a predefined camera motion to be executed, the same camera motion being used when the background images are shot. However the need for predefined motion places severe artistic limitations on the production process.
Techniques are currently under development that aim to generate background images electronically and to change them as the camera moves, so that they remain appropriate to the current camera position. A means of measuring the position of the camera in the studio is therefore required. One way in which this can be done is to attach sensors to the camera to determine its position and angle of view; however the use of such sensors is not always practical.
The problem addressed here is how to derive the position and motion of the camera using only the video signal from the camera. The method can therefore be used with an unmodified camera, without special sensors.
The derivation of the position and motion of a camera by analysis of its image signal is a task often referred to as passive navigation; there are many examples of approaches to this problem in the literature, the more pertinent of which are as follows:
1. Brandt, A., Karmann, K., Lanser, S. 1990. Recursive motion estimation based on a model of the camera dynamics. Signal Processing V: Theories and Applications (Ed. Torres, L. et al.), Elsevier, pp. 959-962.
2. Buxton, B. F., Buxton, H., Murray, D. W., Williams, N. S. 1985. Machine perception of visual motion. GEC Journal of Research, Vol. 3, No. 3, pp. 145-161.
3. Netravali, A. N., Robbins, J. D. 1979. Motion-compensated television coding: Part 1. Bell System Technical Journal, Vol. 58, No. 3, Mar. 1979, pp. 631-670.
4. Thomas, G. A. 1987. Television motion measurement for DATV and other applications. BBC Research Department Report No. 1987/11.
5. Uomori, K., Morimura, A., Ishii, J. 1992. Electronic image stabilisation system for video cameras and VCRs. SMPTE Journal, Vol. 101, No. 2, Feb. 1992, pp. 66-75.
6. Wu, S. F., Kittler, J. 1990. A differential method for simultaneous estimation of rotation, change of scale and translation. Signal Processing: Image Communication 2, Elsevier, pp. 69-80.
For example, if a number of feature points can be identified in the image and their motion tracked from frame to frame, it is possible to calculate the motion of the camera relative to these points by solving a number of non-linear simultaneous equations [Buxton et al. 1985]. The tracking of feature points is often achieved by measuring the optical flow (motion) field of the image. This can be done in a number of ways, for example by using an algorithm based on measurements of the spatiotemporal luminance gradient of the image [Netravali and Robbins 1979].
A similar method is to use Kalman filtering techniques to estimate the camera motion parameters from the optical flow field and depth information [Brandt et al. 1990].
However, in order to obtain reliable (relatively noise-free) information relating to the motion of the camera, it is necessary to have a good number of feature points visible at all times, and for these to be distributed in space in an appropriate manner. For example, if all points are at a relatively large distance from the camera, the effect of a camera pan (rotation of the camera about the vertical axis) will appear very similar to that of a horizontal translation at right angles to the direction of view. Points at a range of depths are thus required to distinguish reliably between these types of motion.
Simpler algorithms exist that allow a subset of camera motion parameters to be determined while placing fewer constraints on the scene content. For example, horizontal and vertical image motions such as those caused by camera panning and tilting can be measured relatively simply for applications such as the steadying of images from hand-held cameras [Uomori et al. 1992].
In order to derive all required camera parameters (three spatial coordinates, pan and tilt angles and degree of zoom) from analysis of the camera images, a large number of points in the image would have to be identified and tracked. Consideration of the operational constraints in a TV studio suggested that providing an appropriate number of well-distributed reference points in the image would be impractical: markers would have to be placed throughout the scene at a range of different depths in such a way that a significant number were always visible, regardless of the position of the camera or actors.
We have appreciated that measurements of image translation and scale change are relatively easy to make; from these measurements it is easy to calculate either of the following (see the sketch after this list):
1. pan, tilt and zoom under the assumption that the camera is mounted on a fixed tripod: the scale change is a direct indication of the amount by which the degree of camera zoom has changed, and the horizontal and vertical translation indicate the change in pan and tilt angles; or
2. horizontal and vertical movement under the assumption that the camera is mounted in such a way that it can move in three dimensions (but cannot pan or tilt) and is looking in a direction normal to a planar background: the scale change indicates the distance the camera has moved along the optical axis and the image translation indicates how far the camera has moved normal to this axis.
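By way of illustration, here is a minimal sketch of case 1, in which measured image translation and scale change are converted into pan, tilt and zoom changes for a tripod-mounted camera. The function name and the pinhole-camera mapping are assumptions of the sketch, not terms from the claims; the mapping for case 2 is analogous, with translation and scale change converted to lateral movement and movement along the optical axis.

```python
import math

def update_pan_tilt_zoom(dx_pixels, dy_pixels, scale_change, focal_length_px):
    """Hypothetical helper for case 1: camera on a fixed tripod."""
    # For a pinhole camera, a horizontal image shift of dx pixels
    # corresponds to a pan through atan(dx / f), where f is the focal
    # length expressed in pixels; similarly for tilt.
    pan_change = math.atan2(dx_pixels, focal_length_px)
    tilt_change = math.atan2(dy_pixels, focal_length_px)
    # The scale change is a direct indication of the change in zoom:
    # the new focal length is the old focal length times the scale change.
    new_focal_length = focal_length_px * scale_change
    return pan_change, tilt_change, new_focal_length
```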
This approach does not require special markers or feature points in the image, merely sufficient detail to allow simple estimation of global motion parameters. Thus it should be able to work with a wide range of picture material. All that is required is measurement of the initial focal length (or angle subtended by the field of view) and the initial position and angle of view of the camera.
The invention is defined by the independent claims to which reference should be made. Preferred features are set out in the dependent claims.
The approach described may be extended to more general situations (giving more freedom in the type of camera motion allowed) if other information such as image depth can be derived [Brandt et al. 1990]. Additional information from sensors on the camera (for example to measure the degree of zoom) may allow more flexibility.
In order to allow the translation and scale change of the image to be measured, there must be sufficient detail present in the background of the image. Current practice is usually based upon the use of a blue screen background, to allow a key signal to be generated by analysing the RGB values of the video signal. Clearly, a plain blue screen cannot be used if camera motion information is to be derived from the image, since it contains no detail. Thus it will be necessary to use a background that contains markings of some sort, but is still of a suitable form to allow a key signal to be generated.
One form of background that is being considered is a ‘checkerboard’ of squares of two similar shades of blue, each closely resembling the blue colour used at present. This should allow present keying techniques to be used, while providing sufficient detail to allow optical flow measurements to be made. Such measurements could be made on a signal derived from an appropriate weighted sum of RGB values designed to accentuate the differences between the shades of blue.
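A minimal sketch of such a matrixing operation follows; the weights are illustrative assumptions chosen to respond to small differences between the blue shades, not values taken from the patent.

```python
import numpy as np

def matrix_rgb(rgb, weights=(-0.3, -0.4, 1.0)):
    """Form a single-component signal from an RGB image of shape
    (height, width, 3), weighted to accentuate the difference
    between the two shades of blue in the checkerboard."""
    return rgb.astype(float) @ np.asarray(weights)
```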
The key signal may be used to remove foreground objects from the image prior to the motion estimation process. Thus the motion of foreground objects will not confuse the calculation.
FIG. 1 shows in block schematic form the basic arrangement of a camera motion estimator system embodying the invention;
FIG. 2 illustrates the relationship of measurement points in current and reference images;
FIG. 3 is a schematic view showing the displacement of a given measurement point from a reference image to the current image; and
FIG. 4 shows a checkerboard background.
The algorithm chosen for measuring global translation and scale change must satisfy the following criteria:
1. The chosen algorithm cannot be too computationally intensive, since it must run in real-time;
2. It must be capable of highly accurate measurements, since measurement errors will manifest themselves as displacement errors between foreground and background;
3. Measurement errors should not accumulate to a significant extent as the camera moves further away from its starting point.
Embodiment 1: Motion Estimation Followed by Global Motion Parameter Determination
An example of one type of algorithm that could be used is one based on a recursive spatio-temporal gradient technique [Netravali and Robbins 1979]. This kind of algorithm is known to be computationally efficient and to be able to measure small displacements to a high accuracy. Other algorithms based on block matching [Uomori et al. 1992] or phase correlation [Thomas 1987] may also be suitable.
The algorithm may be used to estimate the motion on a sample-by-sample basis between each new camera image and a stored reference image. The reference image is initially that viewed by the camera at the start of shooting, when the camera is in a known position. Before each measurement, the expected translation and scale change is predicted from previous measurements and the reference image is subjected to a translation and scale change of this estimated amount. Thus the motion estimation process need only measure the difference between the actual and predicted motion.
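A minimal sketch of the prediction step, assuming (hypothetically) that the accumulated values measured for the two preceding images are available as `(dx, dy, scale)` tuples. Translations are extrapolated linearly; scale is extrapolated geometrically here, since scale changes compound multiplicatively, though linear extrapolation would also be consistent with the text.

```python
def predict_motion(prev, prev2):
    """Predict (dx, dy, scale) for the next image from the two most
    recent measurements by extrapolation."""
    dx = 2.0 * prev[0] - prev2[0]      # linear extrapolation of translation
    dy = 2.0 * prev[1] - prev2[1]
    scale = prev[2] ** 2 / prev2[2]    # geometric extrapolation of scale
    return dx, dy, scale
```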
The motion vector field produced is analysed to determine the horizontal and vertical displacement and scale change. This can be done by selecting a number of points in the vector field likely to have accurate vectors (for example in regions having both high image detail and uniform vectors). The scale change can be determined by examining the difference between selected vectors as a function of the spatial separation of the points. The translation can then be determined from the average values of the measured vectors after discounting the effect of the scale change. The measured values are added to the estimated values to yield the accumulated displacement and scale change for the present camera image.
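A minimal sketch of this analysis under a zoom-plus-shift model, v ≈ (Z − 1)p + t, where p is a point's position relative to the image centre; the function name and least-squares formulation are assumptions of the sketch.

```python
import numpy as np

def fit_zoom_and_shift(positions, vectors):
    """Fit a scale change and translation to measured motion vectors.
    positions, vectors: arrays of shape (n, 2), with positions given
    relative to the image centre."""
    p = np.asarray(positions, float)
    v = np.asarray(vectors, float)
    p0 = p - p.mean(axis=0)
    v0 = v - v.mean(axis=0)
    # The zoom term is the component of the vector differences that
    # grows with the spatial separation of the points.
    k = (p0 * v0).sum() / (p0 * p0).sum()
    scale_change = 1.0 + k
    # Translation is what remains once the zoom term is discounted.
    translation = v.mean(axis=0) - k * p.mean(axis=0)
    return scale_change, translation
```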
More sophisticated methods of analysing the vector field could be added in future, for example in conjunction with means for determining the depth of given image points, to extend the flexibility of the system.
As the accumulated translation and scale change get larger, the translated reference image will begin to provide a poor approximation to the current camera image. For example, if the camera is panning to the right, picture material on the right of the current image will not be present in the reference image and so no motion estimate can be obtained for this area. To alleviate this problem, once the accumulated values exceed a given threshold the reference image is replaced by the present camera image. Each time this happens however, measurement errors will accumulate.
All the processing will be carried out on images that have been spatially filtered and subsampled. This will reduce the amount of computation required, with no significant loss in measurement accuracy. The filtering process also softens the image; this is known to improve the accuracy and reliability of gradient-type motion estimators. Further computational savings can be achieved by carrying out the processing between alternate fields rather than for every field; this will reduce the accuracy with which rapid acceleration can be tracked but this is unlikely to be a problem since most movements of studio cameras tend to be smooth.
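A sketch of this prefiltering stage; the Gaussian kernel and factor-of-two subsampling are illustrative choices, not parameters from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter(image, sigma=1.5, factor=2):
    """Soften a single-component image, then subsample it, so that
    the subsequent gradient-based motion estimation is cheaper and
    more reliable."""
    softened = gaussian_filter(np.asarray(image, float), sigma)
    return softened[::factor, ::factor]
```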
Software to implement the most computationally-intensive parts of the processing has been written and benchmarked, to provide information to aid the specification and design of the hardware accelerator. The benchmarks showed that the process of filtering and down-sampling the incoming images is likely to use over half of the total computation time.
Embodiment 2: Direct Estimation of Global Motion Parameters

An alternative and preferred method of determining global translation and scale change is to derive them directly from the video signal. A method of doing this is described by [Wu and Kittler 1990]. We have extended this method to work with a stored reference image and to use the predicted motion values as a starting point. Furthermore, the technique is applied only at a subset of pixels in the image, which we have termed measurement points, in order to reduce the computational load. As in the previous embodiment, the RGB video signal is matrixed to form a single-component signal and spatially low-pass filtered prior to processing. As described previously, only areas identified by a key signal as background are considered.
The method is applied by considering a number of measurement points in each incoming image and the corresponding points in the reference image, displaced according to the predicted translation and scale change. These predicted values may be calculated, for example, by linear extrapolation of the measurements made in the preceding two images. The measurement points may be arranged as a regular array, as shown in FIG. 2. A more sophisticated approach would be to concentrate measurement points in areas of high luminance gradient, to improve the accuracy when a limited number of points are used. We have found that 500-1000 measurement points distributed uniformly yields good results. Points falling in the foreground areas (as indicated by the key signal) are discarded, since it is the motion of the background that is to be determined.
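A minimal sketch of generating the measurement points: a regular grid of roughly 500-1000 points, with foreground points discarded. The name `key` is an assumption of the sketch, taken to be a boolean mask that is True where background is visible.

```python
import numpy as np

def measurement_points(height, width, key, n_points=750):
    """Return (row, col) measurement points on a regular grid,
    keeping only those falling in visible background areas."""
    step = max(1, int(np.sqrt(height * width / n_points)))
    rows, cols = np.mgrid[step // 2:height:step, step // 2:width:step]
    points = np.stack([rows.ravel(), cols.ravel()], axis=1)
    # Discard points that the key signal marks as foreground.
    return points[key[points[:, 0], points[:, 1]]]
```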
At each measurement point, luminance gradients are calculated as shown in FIG. 3. These may be calculated, for example, by simply taking the difference between pixels either side of the measurement point. Spatial gradients are also calculated for the corresponding point in the reference image, offset by the predicted motion. Sub-pixel interpolation may be employed when calculating these values. The temporal luminance gradient is also calculated; again sub-pixel interpolation may be used in the reference image. An equation is formed relating the measured gradients to the motion values as follows:
Gradients are (approximately) related to displacement and scale change by the equation

$$g_x X + g_y Y + (x\,g_x + y\,g_y)\,Z = -g_t$$

where $(x, y)$ are the coordinates of the measurement point relative to the centre of the image, and

$$g_x = (g_{cx} + g_{rx})/2, \qquad g_y = (g_{cy} + g_{ry})/2$$

are the horizontal and vertical luminance gradients averaged between the two images;
$g_{cx}, g_{cy}$ are the horizontal and vertical luminance gradients in the current image;
$g_{rx}, g_{ry}$ are the horizontal and vertical luminance gradients in the reference image; and
$g_t$ is the temporal luminance gradient.
$X$ and $Y$ are the displacements between the current and reference image, and $Z$ is the scale change (over and above those predicted).
An equation is formed for each measurement point and a least-squares solution is calculated to obtain values for X, Y, and Z.
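A minimal sketch of this step: one equation per measurement point, solved with an off-the-shelf least-squares routine. The array names are assumptions of the sketch.

```python
import numpy as np

def solve_global_motion(gx, gy, gt, x, y):
    """Solve gx*X + gy*Y + (x*gx + y*gy)*Z = -gt in the least-squares
    sense over all measurement points. gx, gy are the averaged spatial
    gradients, gt the temporal gradients, and x, y the point
    coordinates relative to the image centre (all 1-D arrays)."""
    # One row of the system per measurement point.
    a = np.stack([gx, gy, x * gx + y * gy], axis=1)
    solution, *_ = np.linalg.lstsq(a, -np.asarray(gt, float), rcond=None)
    X, Y, Z = solution
    return X, Y, Z
```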
Derivation of the equation may be found in [Wu and Kittler 1990] (this reference includes the effect of image rotation; we have omitted rotation, since it is of little relevance here as studio cameras tend to be mounted such that they cannot rotate about the optical axis).
The set of simultaneous linear equations derived in this way (one for each measurement point) is solved using a standard least-squares solution method to yield estimates of the difference between the predicted and the actual translation and scale change. The calculated translation values are then added to the predicted values to yield the estimated translation between the reference image and the current image.
Similarly, the calculated and predicted scale changes are multiplied together to yield the estimated scale change. The estimated values thus calculated are then used to derive a prediction for the translation and scale change of the following image.
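A sketch of this combination step. Treating the least-squares $Z$ as a small fractional scale change, so that the corresponding scale factor is $(1 + Z)$, is an interpretive assumption of the sketch.

```python
def accumulate(predicted, residual):
    """Combine predicted motion with the measured residual:
    translations add, scale changes multiply."""
    pred_x, pred_y, pred_scale = predicted
    X, Y, Z = residual
    return pred_x + X, pred_y + Y, pred_scale * (1.0 + Z)
```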
As described earlier, the reference image is updated when the camera has moved sufficiently far away from its initial position. This automatic refreshing process may be triggered, for example, when the area of overlap between the incoming and reference image falls below a given threshold. When assessing the area of overlap, the key signal needs to be taken into account, since for example an actor who obscured the left half of the background in the reference image might move so that he obscures the right half, leaving no visible background in common between incoming and reference images. One way of measuring the degree of overlap is to count the number of measurement points that are usable (i.e. that fall in visible background areas of both the incoming and reference image). This number may be divided by the number of measurement points that were usable when the reference image was first used, to obtain a measure of the usable image area as a fraction of the maximum area obtainable with that reference image. If the initial number of usable points in a given reference image was itself below a given threshold, this would indicate that most of the image was taken up with foreground rather than background, and a warning message should be produced.
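A minimal sketch of the refresh test just described; the threshold is illustrative.

```python
def needs_refresh(usable_now, usable_at_adoption, min_fraction=0.5):
    """Trigger a reference-image refresh when the number of usable
    measurement points falls below a fraction of the count recorded
    when the reference image was first adopted."""
    if usable_at_adoption == 0:
        return True  # no usable background at adoption: refresh (and warn)
    return usable_now / usable_at_adoption < min_fraction
```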
It can also be advantageous to refresh the reference image if the measured scale change exceeds a given range (e.g., if the camera zooms in a long way). Although in this situation the number of usable measurement points may be very high, the resolution of the stored reference image could become inadequate to allow accurate motion estimation.
When the reference image is updated, it can be retained in memory for future use, together with details of its accumulated displacement and scale change. When a decision is made that the current reference image is no longer appropriate, the stored images can be examined to see whether any of them gives a suitable view of the scene. This assessment can be carried out using similar criteria to those explained above. For example, if the camera pans to the left and then back to its starting position, the initial reference image may be re-used as the camera approaches this position. This ensures that measurements of camera orientation made at the end of the sequence will be as accurate as those made at the beginning.
Referring back to FIG. 1, apparatus for putting each of the two motion estimation methods into practice is shown. A camera 10 derives a video signal from the background 12 which, as described previously, may be patterned in two tones as shown in FIG. 4. The background cloth shown in FIG. 4 has a two-tone arrangement of squares. Squares 30 of one tone are arranged adjacent to squares 32 of the other tone. Shapes other than squares may be used and it is possible to use more than two different tones. Moreover, the tones may differ in both hue and brightness or in either hue or brightness. At present, it is considered preferable for the brightness to be constant, as variations in brightness might show in the final image.
Although blue is the most common colour for the backcloth, other colours, for example green or orange, are sometimes used when appropriate. The technique described is not specific to any particular background colour; it requires only a slight variation in hue and/or brightness between a number of different areas of the background, such that the areas adjacent to a given area differ from it in brightness and/or hue. This contrast enables motion estimation from the background to be performed.
Red, green and blue (RGB) colour signals formed by the camera are matrixed into a single colour signal and applied to a spatial low-pass filter (at 14). The low-pass output is applied to an image store 16 which holds the reference image data and whose output is transformed at 18 by applying the predicted motion for the image. The motion-adjusted reference image data is applied, together with the low-pass filtered image, to a unit 20 which measures the net motion in background areas between an incoming image at input I and a stored reference image at input R. The unit 20 applies one of the motion estimation algorithms described. The net motion measurement is performed under the control of a key signal K, derived by a key generator 22 from the unfiltered RGB output of the camera 10, to exclude foreground portions of the image from the measurement. The motion prediction signal is updated on the basis of previously measured motion, thus ensuring that the output from the image store 16 is accurately interpolated. When, as discussed previously, the camera has moved sufficiently far from its initial position, a refresh signal 24 is sent from the net motion measurement unit 20 to the image store 16. On receipt of the refresh signal 24 a fresh image is stored in the image store and used as the basis for future net motion measurements.
The output from the net motion measurement unit 20 is used to derive an indication of current camera position and orientation as discussed previously.
Optionally, sensors 26 mounted on the camera can provide data to the net motion measurement unit 20 which augment or replace the image-derived motion signal.
The image store 16 may comprise a multi-frame store enabling storage of previous reference images as well as the current reference image.
The technique described can also be applied to image signals showing arbitrary picture material instead of just the blue background described earlier. If objects are moving in the scene, these can be segmented out by virtue of their motion rather than by using a chroma-key signal. The segmentation could be performed, for example, by discounting any measurement points for which the temporal luminance gradient (after compensating for the predicted background motion) was above a certain threshold. More sophisticated techniques for detecting motion relative to the predicted background motion can also be used.
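A sketch of this motion-based segmentation; the threshold value is an illustrative assumption.

```python
import numpy as np

def background_points(points, gt_compensated, threshold=8.0):
    """Keep only measurement points whose temporal luminance gradient,
    after compensation for the predicted background motion, is small;
    larger values suggest an independently moving foreground object."""
    gt = np.abs(np.asarray(gt_compensated, float))
    return points[gt < threshold]
```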
It will be understood that the techniques described may be implemented either by special purpose digital signal processing equipment, by software in a computer, or by a combination of these methods. It will also be clear that the technique can be applied equally well to any television standard.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4227212 *||Sep 21, 1978||Oct 7, 1980||Westinghouse Electric Corp.||Adaptive updating processor for use in an area correlation video tracker|
|US4393394 *||Aug 17, 1981||Jul 12, 1983||Mccoy Reginald F H||Television image positioning and combining system|
|US4739401 *||Jan 25, 1985||Apr 19, 1988||Hughes Aircraft Company||Target acquisition system and method|
|US4823184 *||Sep 8, 1987||Apr 18, 1989||Corporate Communications Consultants, Inc.||Color correction system and method with scene-change detection|
|US4984078 *||Sep 2, 1988||Jan 8, 1991||North American Philips Corporation||Single channel NTSC compatible EDTV system|
|US5018215 *||Mar 23, 1990||May 21, 1991||Honeywell Inc.||Knowledge and model based adaptive signal processor|
|US5049991 *||Feb 20, 1990||Sep 17, 1991||Victor Company Of Japan, Ltd.||Movement compensation predictive coding/decoding method|
|US5073819 *||Apr 5, 1990||Dec 17, 1991||Computer Scaled Video Surveys, Inc.||Computer assisted video surveying and method thereof|
|US5103305 *||Sep 26, 1990||Apr 7, 1992||Kabushiki Kaisha Toshiba||Moving object detecting system|
|US5195144 *||Feb 12, 1992||Mar 16, 1993||Thomson-Csf||Process for estimating the distance between a stationary object and a moving vehicle and device for using this process|
|US5351086 *||Dec 28, 1992||Sep 27, 1994||Daewoo Electronics Co., Ltd.||Low-bit rate interframe video encoder with adaptive transformation block selection|
|US5357281 *||Nov 3, 1992||Oct 18, 1994||Canon Kabushiki Kaisha||Image processing apparatus and terminal apparatus|
|EP0236519A1 *||Mar 8, 1986||Sep 16, 1987||ANT Nachrichtentechnik GmbH||Motion compensating field interpolation method using a hierarchically structured displacement estimator|
|EP0360698A1 *||Sep 22, 1989||Mar 28, 1990||THOMSON multimedia||Movement estimation process and device in a moving picture sequence|
|EP0366136A1 *||Oct 26, 1989||May 2, 1990||Canon Kabushiki Kaisha||Image sensing and processing device|
|GB2259625A *||Title not available|
|JPS5793788A||Title not available|
|WO1980001977A1 *||Feb 25, 1980||Sep 18, 1980||Western Electric Co||Technique for estimation of displacement and/or velocity of objects in video scenes|
|WO1990016131A1 *||Jun 15, 1990||Dec 27, 1990||Spaceward Limited||Digital video recording|
|WO1991020155A1 *||Jun 19, 1991||Dec 26, 1991||British Broadcasting Corporation||Video signal processing|
|1||*||A. N. Netravali et al., "Motion-Compensated Television Coding: Part I", The Bell System Technical Journal, vol. 58 #3, pp. 631-671, Mar. 1979.*|
|2||*||A. V. Brandt et al, "Recursive Motion Estimation Based on a Model of the Camera Dynamics" Signal Processing V: Theories and Applications; pp. 959-962, 1990.*|
|3||*||B. F. Buxton et al., "Machine Perception of Visual Motion", GEC Journal of Research, vol. 3 #3, pp. 145-161, 1985.*|
|4||*||G. A. Thomas, B.A. "Television Motion Measurement for DATV and Other Applications", The BBC Research Department, Report #198711, Engineering Division, pp. 1-20, Sep. 1987.*|
|5||*||K. Uomori et al. "Electronic Image Stabilization System for Video Cameras and VCRs", SMPTE Journal; #2 pp. 66-75, Feb. 1992.*|
|6||*||Rose, "Evaulation of a Motion Estimation Procedure for Low-Frame-Rate Aerial Images", ISCA '88 Journal, IEEE International Symposium on Circuits and Systems, vol. 3 pp. 2309-2312, 1988.|
|7||*||S. F. Wu et al., "A Differential Method for Simultaneous Estimation of Rotation, Change of Scale and Translation," Signal Processing: Image Communication 2; pp. 69-80, 1990.*|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6912313 *||May 31, 2001||Jun 28, 2005||Sharp Laboratories Of America, Inc.||Image background replacement method|
|US7197074 *||Feb 20, 2003||Mar 27, 2007||The Regents Of The University Of California||Phase plane correlation motion vector determination method|
|US7379566 *||Jan 6, 2006||May 27, 2008||Gesturetek, Inc.||Optical flow based tilt sensor|
|US7382400||Feb 19, 2004||Jun 3, 2008||Robert Bosch Gmbh||Image stabilization system and method for a video camera|
|US7430312||Jan 9, 2006||Sep 30, 2008||Gesturetek, Inc.||Creating 3D images of objects by illuminating with infrared patterns|
|US7570805||Apr 23, 2008||Aug 4, 2009||Gesturetek, Inc.||Creating 3D images of objects by illuminating with infrared patterns|
|US7574020||Apr 7, 2008||Aug 11, 2009||Gesturetek, Inc.||Detecting and tracking objects in images|
|US7742077||Aug 9, 2005||Jun 22, 2010||Robert Bosch Gmbh||Image stabilization system and method for a video camera|
|US7822267||Jun 24, 2008||Oct 26, 2010||Gesturetek, Inc.||Enhanced object reconstruction|
|US7827698||Mar 28, 2008||Nov 9, 2010||Gesturetek, Inc.||Orientation-sensitive signal output|
|US7848542 *||Oct 31, 2007||Dec 7, 2010||Gesturetek, Inc.||Optical flow based tilt sensor|
|US7853041||Jan 6, 2006||Dec 14, 2010||Gesturetek, Inc.||Detecting and tracking objects in images|
|US7953271||Oct 26, 2010||May 31, 2011||Gesturetek, Inc.||Enhanced object reconstruction|
|US8015718||Nov 8, 2010||Sep 13, 2011||Qualcomm Incorporated||Orientation-sensitive signal output|
|US8144118||Jan 23, 2006||Mar 27, 2012||Qualcomm Incorporated||Motion-based tracking|
|US8170281||Aug 10, 2009||May 1, 2012||Qualcomm Incorporated||Detecting and tracking objects in images|
|US8212872||Jun 2, 2004||Jul 3, 2012||Robert Bosch Gmbh||Transformable privacy mask for video camera images|
|US8213686||Dec 6, 2010||Jul 3, 2012||Qualcomm Incorporated||Optical flow based tilt sensor|
|US8218858||May 27, 2011||Jul 10, 2012||Qualcomm Incorporated||Enhanced object reconstruction|
|US8230610||Sep 12, 2011||Jul 31, 2012||Qualcomm Incorporated||Orientation-sensitive signal output|
|US8483437||Mar 20, 2012||Jul 9, 2013||Qualcomm Incorporated||Detecting and tracking objects in images|
|US8559676||Dec 27, 2007||Oct 15, 2013||Qualcomm Incorporated||Manipulation of virtual objects using enhanced interactive system|
|US8670611||Oct 24, 2011||Mar 11, 2014||International Business Machines Corporation||Background understanding in video data|
|US8717288||Feb 28, 2012||May 6, 2014||Qualcomm Incorporated||Motion-based tracking|
|US8983139 *||Jun 25, 2012||Mar 17, 2015||Qualcomm Incorporated||Optical flow based tilt sensor|
|US9129380||Jan 21, 2014||Sep 8, 2015||International Business Machines Corporation||Background understanding in video data|
|US9210312||Aug 9, 2005||Dec 8, 2015||Bosch Security Systems, Inc.||Virtual mask for use in autotracking video camera images|
|US9234749||Jun 6, 2012||Jan 12, 2016||Qualcomm Incorporated||Enhanced object reconstruction|
|US9460349||Aug 11, 2015||Oct 4, 2016||International Business Machines Corporation||Background understanding in video data|
|US20020186881 *||May 31, 2001||Dec 12, 2002||Baoxin Li||Image background replacement method|
|US20040179594 *||Feb 20, 2003||Sep 16, 2004||The Regents Of The University Of California||Phase plane correlation motion vector determination method|
|US20050185058 *||Feb 19, 2004||Aug 25, 2005||Sezai Sablak||Image stabilization system and method for a video camera|
|US20050270372 *||Jun 2, 2004||Dec 8, 2005||Henninger Paul E Iii||On-screen display and privacy masking apparatus and method|
|US20050275723 *||Aug 9, 2005||Dec 15, 2005||Sezai Sablak||Virtual mask for use in autotracking video camera images|
|US20050280707 *||Aug 9, 2005||Dec 22, 2005||Sezai Sablak||Image stabilization system and method for a video camera|
|US20060177103 *||Jan 6, 2006||Aug 10, 2006||Evan Hildreth||Optical flow based tilt sensor|
|US20060188849 *||Jan 6, 2006||Aug 24, 2006||Atid Shamaie||Detecting and tracking objects in images|
|US20080137913 *||Oct 31, 2007||Jun 12, 2008||Gesture Tek, Inc.||Optical Flow Based Tilt Sensor|
|US20080166022 *||Dec 27, 2007||Jul 10, 2008||Gesturetek, Inc.||Manipulation Of Virtual Objects Using Enhanced Interactive System|
|US20080187178 *||Apr 7, 2008||Aug 7, 2008||Gesturetek, Inc.||Detecting and tracking objects in images|
|US20080199071 *||Apr 23, 2008||Aug 21, 2008||Gesturetek, Inc.||Creating 3d images of objects by illuminating with infrared patterns|
|US20080235965 *||Mar 28, 2008||Oct 2, 2008||Gesturetek, Inc.||Orientation-sensitive signal output|
|US20090003686 *||Jun 24, 2008||Jan 1, 2009||Gesturetek, Inc.||Enhanced object reconstruction|
|US20090295756 *||Aug 10, 2009||Dec 3, 2009||Gesturetek, Inc.||Detecting and tracking objects in images|
|US20110074974 *||Dec 6, 2010||Mar 31, 2011||Gesturetek, Inc.||Optical flow based tilt sensor|
|US20120268622 *||Jun 25, 2012||Oct 25, 2012||Qualcomm Incorporated||Optical flow based tilt sensor|
|US20130027548 *||Jul 28, 2011||Jan 31, 2013||Apple Inc.||Depth perception device and system|
|WO2006074290A2 *||Jan 6, 2006||Jul 13, 2006||Gesturetek, Inc.||Optical flow based tilt sensor|
|WO2006074290A3 *||Jan 6, 2006||May 18, 2007||Gesturetek Inc||Optical flow based tilt sensor|
|U.S. Classification||348/140, 348/25, 348/700, 348/703|
|International Classification||H04N7/26, G06T7/20, H04N5/14, G06T7/00|
|Cooperative Classification||G06T7/70, H04N19/53, H04N19/527, H04N19/54, G06T7/20, G06T2207/30244, G06K9/209, H04N5/144, H04N5/2224|
|European Classification||H04N5/222S, G06T7/00P, H04N7/26M2H, H04N7/26M2G, H04N7/26M2N2, G06T7/20, H04N5/14M, G06K9/20S|
|Jul 30, 1998||AS||Assignment|
Owner name: BRITISH BROADCASTING CORPORATION, UNITED KINGDOM
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMAS, GRAHAM ALEXANDER;REEL/FRAME:009358/0784
Effective date: 19980625
|Jul 27, 2007||FPAY||Fee payment|
Year of fee payment: 12