|Publication number||US6587809 B2|
|Application number||US 09/979,370|
|Publication date||Jul 1, 2003|
|Filing date||Dec 15, 2000|
|Priority date||Jul 14, 1999|
|Also published as||US20020152040, WO2002048647A1|
|PCT number||PCT/GB2000/004810|
|Original Assignee||Hypervision Limited|
1. Field of the Invention
This invention relates generally to systems for determining the position and orientation of an object which may deform in shape over time, and which use the detection of energy emitted from markers placed on the object.
2. Description of the Related Art
As is known in the art, passive systems exist which rely on the markers being illuminated with energy that reflects off the markers and is detected by the sensor system. Active systems also exist in which the markers are individual sources of energy. In both cases the energy is focused onto spaced sensors, such that the position of an energised marker is identified by the subset of adjacent sensor points recording an energy level above a given threshold.
By identifying which adjacent sensors are detecting energy above the threshold, the associated computing devices can estimate the position of the marker emitting the energy in a given plane in space, since the focusing function relates a point in space to a sensor on the sensor system. To achieve a high resolution measurement from such systems, a very large number of sensors needs to be positioned adjacent to each other, as each sensor relates to a point in space. Having a large number of sensors degrades the capture rate, since the signal levels of all those sensors must be digitised. By using three displaced sensor sets, the position of the marker can be calculated to a certain level of accuracy in three dimensional space.
In such systems energy from the marker is directly focused onto the sensors so that only a small number of sensors detect energy over the given threshold. Such systems do not measure the distribution of energy levels across a large percentage of the total number of sensors and do not calculate the position of the marker based on an energy distribution function for which a maximum value occurs for a calculated marker position.
In passive systems all illuminated markers are energised and detected simultaneously.
The computing device therefore needs first to identify which subset of adjacent sensors detecting energy above the given threshold corresponds to which marker. Second, it must track each marker from one sample to the next and attempt to distinguish each marker at all times. This introduces the possibility of errors in which marker assignments are lost, and it requires intensive processing methods.
Active systems may illuminate all markers at the same time, or they can cycle the illumination of each marker to help the computing system distinguish individual markers. If all markers illuminate at the same time, the computing device must be able to identify the correspondence between each marker and each energy detection, and it must then track each marker in a similar way to the passive system.
In an active system that illuminates each marker individually, the computing device can immediately make the correspondence of marker energy emission and detection since the cycle time will be known. As each energy emission is recorded separately, no tracking is required and the position is simply calculated for each illumination. In one such system the sensor set is a multiple charge coupled device (CCD) onto which the energy is focused. To detect in 3D space at least three CCD detectors are used. In order to achieve high measurement resolution the CCD must have a large number of detecting sensors since the focusing function relates a point in space to each sensor point on the CCD. In order to achieve very high sample rates for a large number of markers the CCD must be driven at very high rates, well above the needs of conventional CCD devices. High resolution CCD devices capable of working at high data capture rates are expensive. In addition systems that use CCD devices have a measurement non-linearity dependent on the ability of the lens component to accurately focus the marker points linearly across the CCD sensor surface and not introduce any aberration.
As the CCD is moved further away from the markers on the object of interest, the measurement scaling changes, since the focused image of the markers on the sensor system changes in size. Due to lens aberration and changes in measurement scaling, such systems require a calibration phase in which an object of known dimensions is introduced for sampling. This calibration task is inconvenient, often needs experienced personnel to perform it, and is considered a negative aspect from the end user's point of view.
In accordance with the present invention, a system for determining the position, orientation and deformation in three dimensional space of a moving object in real-time is provided, having: a plurality of activatable markers which are mounted onto parts of the object for which a position value needs to be recorded; a drive unit which drives the activatable markers in a defined sequence; a sensor section comprising a plurality of sensors remote from the markers and suitably arranged such that the energy falling on each sensor is dependent on the relative position of the energised marker and the sensor; a data capture unit which digitises the signals sensed by each sensor; and a control unit which processes the signals received from the data capture unit.
The object may move in space and may deform in shape and the markers are mounted so as to move with the object and follow the shape deformation.
Each marker is activated individually for a period of time by the drive unit in a sequence known to the data capture unit and the control unit. While each marker is illuminated, the energy from the marker is detected by all sensors. The sensors are arranged such that the energy distribution sensed by the plurality of sensors for a single energised marker is a function of the position of the marker. The digitised energy levels are transmitted to the control unit at high speed and the information is processed to determine the position of the marker. The control unit calculates the position of the marker based on an energy distribution function for which a maximum value occurs at the calculated marker position.
By using this approach relatively few sensors are needed to determine marker position. This results in low digitisation and data collection overheads and therefore faster sample rates than if CCD devices were used. In addition by using a much lower number of sensor components significant cost reductions are achievable.
It is important to stress that the amplitude of the energy signal is not used to determine the position of the marker; rather, it is the energy distribution pattern over a number of sensors. For example, the distance from the marker to a sensor is not calculated from the detected energy amplitude per se, in which case the emitter and sensor would need to be finely calibrated.
Since accurate signal strength values are not needed, calibration of the energy emission and detection components is unnecessary.
Since the system relies on the way energy is distributed over the sensors, there is no need to calibrate the system for measurement scaling.
Since only one marker is activated at a time during a single cycle, the sensor section can individually determine the position of each marker as each is separately illuminated, thereby making marker tracking unnecessary.
To significantly reduce the effect of external ambient energy radiation being superimposed upon the energy signal emitted by each marker, the drive unit can split each marker's illumination period into two parts. In the first part the marker is fully illuminated and the detected signal level is digitised. During the second part the marker is not illuminated, and instead the data capture unit samples and digitises the ambient energy signal for each sensor. Since the time interval between the two samples is very small, the ambient energy level during illumination can be assumed equal to the ambient signal recorded immediately afterwards, and since the effect of ambient energy on marker energy detection can be considered to follow a simple superimposition rule, the final signal level attributed to the marker is the illuminated signal level less the ambient signal level. This subtraction can be performed by the data capture unit and the result transmitted to the control unit.
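The two-part sampling scheme amounts to a per-sensor subtraction. A minimal sketch in Python (illustrative only; the function name and sample values are not part of the disclosure):

```python
def ambient_corrected(illuminated, ambient):
    """Per-sensor ambient correction: subtract the ambient-only sample
    from the illuminated sample. Assumes the ambient level is unchanged
    over the short interval between the two samples and that ambient and
    marker energy superimpose linearly. Negative results are clamped to
    zero as noise."""
    return [max(lit - amb, 0) for lit, amb in zip(illuminated, ambient)]

# Example: five sensors, illuminated sample then ambient-only sample
print(ambient_corrected([120, 340, 510, 330, 115], [20, 25, 30, 25, 20]))
# [100, 315, 480, 305, 95]
```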
To deal with possible saturation of sensors due to the marker being too close to a group of sensors, the drive unit can be designed to drive the marker at maximum illumination, followed by a period of illumination at 50% of maximum, followed by zero illumination for the purposes of ambient energy detection. In cases where the maximum illumination level saturates the sensors, the data capture unit can globally choose to use the energy levels recorded while the drive unit illuminated the marker at 50% of maximum drive. In this way the system can automatically adapt to saturation of the sensors, which may occur if the marker is positioned very close to them.
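The full-drive/half-drive fallback can be sketched as a simple global selection rule (an illustrative sketch; the saturation threshold value is an assumption, not specified by the disclosure):

```python
def select_capture(full_levels, half_levels, saturation_level=1023):
    """Use the full-drive samples unless any sensor reached saturation,
    in which case fall back globally to the 50%-drive samples. The later
    position calculation depends on the distribution shape rather than
    the amplitude, so the lower drive level does not affect accuracy."""
    if any(level >= saturation_level for level in full_levels):
        return half_levels
    return full_levels

# A saturated reading in the full-drive set triggers the fallback:
print(select_capture([400, 1023, 600], [200, 510, 300]))  # [200, 510, 300]
```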
In a preferred embodiment of the invention, the system includes: a plurality of infra red emitting markers that can be mounted to points of interest on the object; a drive unit which sequences the activation of the markers according to a synchronisation signal derived from the control unit; a sensor section, preferably a linear array of infra red sensors where each sensor is set approximately 75 mm apart and placed behind a linear collimator slot orthogonal to the linear array axis; a data capture section comprising a set of microprocessors which read the sensor levels and send the data to a control unit; a control unit comprising a processor to calculate the position of each marker and a data reception part for reception of the sensor level data from the data capture unit.
The sensors are provided with a collimator through which the energy may pass only up to a limited angle of incidence, beyond which the sensor detects no energy. When the marker is positioned directly above a sensor, the full energy of the marker is detected; however, as the marker moves away from the perpendicular axis through the sensor, the detected energy level reduces as the collimator begins to attenuate the energy emitted by the marker.
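As an illustration of this geometry, a toy model of a collimated sensor might look as follows (the cosine fall-off law and the cut-off angle are assumptions made for illustration, not the characterisation given in the disclosure):

```python
import math

def detected_energy(marker_x, marker_height, sensor_x, theta_max_deg, emitted=1.0):
    """Toy collimated-sensor model: energy passes only when the angle of
    incidence is within theta_max; beyond it the collimator blocks the
    marker entirely. Within the passband, detected energy falls off as
    the marker moves away from the sensor's perpendicular axis."""
    angle = math.atan2(abs(marker_x - sensor_x), marker_height)
    if angle > math.radians(theta_max_deg):
        return 0.0
    return emitted * math.cos(angle)  # illustrative fall-off law

print(detected_energy(0.0, 100.0, 0.0, 30.0))     # 1.0 (marker directly above sensor)
print(detected_energy(1000.0, 100.0, 0.0, 30.0))  # 0.0 (outside the collimator angle)
```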
The control unit activates the drive unit to begin the illumination of markers in a sequence known to the control unit. The sensor section detects the emitted energy and the data capture section digitises the levels and transmits them to the control unit. On reception the control unit analyses the data from a single sensor axis and calculates the most likely point along the length of the sensor axis which corresponds to the nearest point between the axis and the marker.
The control unit may receive data from several sensor axes, at least three, and will compute from them the position of the marker in 3D space.
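One way such a computation could be set up (an illustrative formulation, not taken from the disclosure): each linear array i, with a reference point o_i and unit direction d_i, reports the along-axis coordinate s_i of the point on its axis nearest the marker, i.e. d_i · (p − o_i) = s_i; three or more such constraints form a linear system in the marker position p.

```python
import numpy as np

def marker_position_3d(origins, directions, along_axis_coords):
    """Least-squares solve for the marker position p from linear-array
    measurements d_i . (p - o_i) = s_i, one row per array."""
    D = np.asarray(directions, dtype=float)   # unit direction of each array
    o = np.asarray(origins, dtype=float)      # a reference point on each array
    s = np.asarray(along_axis_coords, dtype=float)
    rhs = s + np.einsum('ij,ij->i', D, o)     # s_i + d_i . o_i
    p, *_ = np.linalg.lstsq(D, rhs, rcond=None)
    return p

# Three mutually orthogonal arrays through the origin, marker at (1, 2, 3):
p = marker_position_3d(np.zeros((3, 3)), np.eye(3), [1.0, 2.0, 3.0])
```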
The control unit may display the information, store it or transmit it to another computing device for further use.
FIG. 1 is a block diagram of a system for determining the position in 3 dimensional space of a moving object in real-time according to the invention.
FIG. 2 is a diagram indicating how energy emitted from a marker in different positions falls onto a sensor masked with a slot collimator.
FIG. 3 is a graph depicting the intensity of energy received by each sensor arrangement, where five sensor arrangements 1, 2, 3, 4, 5 are placed in a straight line and equidistant from each other.
FIGS. 4a-4c are flow charts of the sub-processes used by the system of FIG. 1 in determining the position in 3 dimensional space of a moving object in real-time according to the invention, and in particular:
FIG. 4a is the flow chart for the synchronisation, reception and co-ordinate processing by the control unit.
FIG. 4b is the flow chart for the sensor signal digitisation by the data capture unit.
FIG. 4c is the drive sequence flow chart for the illumination of markers. T is the total period of drive time specified for one marker.
FIG. 5 depicts the energy levels from a single sensor during a typical marker illumination sequence, showing the levels recorded during maximum illumination, during 50% of maximum and during ambient recording. T is the total period of drive time specified for one marker.
Referring to FIG. 1, a system 1 for determining the position, orientation and deformation in three dimensional space of a moving object 2 is provided wherein a plurality of, in this instance five, active emitters or markers 3a, 3b, 3c, 3d, 3e affixed to the object 2 are activated by a drive unit 4 following a set sequence, for example 3a then 3b then 3c then 3d then 3e for each cycle, known to the control unit 5. The drive unit begins a drive cycle as a result of a synchronisation signal 6 sent by the control unit 5. As each marker is illuminated in turn by the drive unit, emitted energy from each marker is detected by a linear array of sensors 7 containing sensors 8a, 8b, 8c, 8d, 8e, displaced by a distance depending on the resolution required but preferably 75 mm, and positioned behind collimating slots 9, which are arranged orthogonal to the array axis. The slots 9a, 9b, 9c, 9d, 9e act as barriers to energy arriving at an angle greater than Theta but pass energy arriving within this angle of incidence. As shown in FIG. 2, this arrangement ensures that a marker can be detected only if it is within a region of space running perpendicular to the linear array. It also ensures that the level of energy detected by adjacent sensors decreases on either side of the point along the length of the sensor axis which corresponds to the nearest point between the axis and the marker. The data capture unit 10 receives the synchronisation signal from the control unit, synchronises the digitisation of the energy levels detected by the sensors 8, and transmits them to the control unit 5. The control unit 5 calculates, for each linear array and for each marker, the position along the length of the sensor axis that corresponds to the nearest point between the axis and the marker.
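The drive sequencing can be sketched as a generator over the marker sequence, with the three drive phases of FIG. 5 (an illustrative sketch; the names and level values are assumptions):

```python
MARKERS = ["3a", "3b", "3c", "3d", "3e"]
DRIVE_LEVELS = [1.0, 0.5, 0.0]  # full drive, 50% drive, ambient-only

def drive_cycle(markers=MARKERS, levels=DRIVE_LEVELS):
    """Yield (marker, drive_level) steps for one complete cycle, activating
    one marker at a time in the fixed sequence known to the control unit,
    with the three drive phases used for saturation and ambient handling."""
    for marker in markers:
        for level in levels:
            yield marker, level

steps = list(drive_cycle())
print(len(steps), steps[0])  # 15 ('3a', 1.0)
```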
The flow diagrams of FIGS. 4a, 4b and 4c explain the processes performed in the control unit, the data capture unit and the drive unit.
In the preferred embodiment, the calculation of the position of a marker relative to a linear sensor array is performed by the control unit 5. The control unit receives energy signal information from all sensors in the array corresponding to an activated marker. FIG. 3 is a graph depicting the energy distribution in a typical linear sensor array of five sensors. The control unit processor loads the signal level values from each sensor into an array in memory and determines which sensors are registering a signal level above zero. Depending on the sensor arrangement, the energy distribution function can be described by linear, quasi-linear or non-linear equations. In one embodiment the energy distribution is parabolic in shape, and the processor therefore computes the coefficients of the quadratic equation that best fits the energy distribution data.
Having determined the equation coefficients, the estimate for the marker position is calculated as the point for which the equation evaluates to a maximum.
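For the parabolic case, the fit and maximum might be sketched as follows (an illustrative sketch; the 75 mm pitch is that of the preferred embodiment, while the sample data are invented):

```python
import numpy as np

SENSOR_PITCH_MM = 75.0  # sensor spacing in the preferred embodiment

def marker_position_along_array(levels):
    """Fit a quadratic to the sensors registering a level above zero and
    return the position along the array (in mm) at which the fitted
    parabola attains its maximum."""
    levels = np.asarray(levels, dtype=float)
    idx = np.nonzero(levels > 0)[0]
    x = idx * SENSOR_PITCH_MM
    a, b, _ = np.polyfit(x, levels[idx], 2)  # least-squares quadratic fit
    if a >= 0:
        raise ValueError("fitted parabola has no maximum")
    return -b / (2.0 * a)                    # vertex of the parabola

# Symmetric distribution peaked midway between the two central active sensors:
print(round(marker_position_along_array([0.0, 40.0, 90.0, 90.0, 40.0]), 3))  # 187.5
```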
Having calculated the position of the marker relative to the particular sensor array, the control unit 5 loads data relating to the next sensor array and applies the same algorithm to determine the position of the marker relative to that sensor array. Once all sensor array data has been processed for the marker of concern, sensor data relating to the next marker in the sequence can be processed. If more than one processor is used in the control unit, this processing may be shared across processors.
Once all marker positions have been calculated the marker position data may be further processed according to the needs of the application.
FIG. 5 depicts the sensor level value for one sensor during the activation of a single marker over time. The figure shows at 1 the signal level recorded when the marker is illuminated at maximum energy output, at 2 the signal level recorded when the marker is illuminated at 50% of maximum energy, and at 3 the signal level recorded when the marker is not illuminated at all. The data capture unit 10 samples and digitises the levels at each of points 1, 2 and 3 and stores these values. The capture unit 10 examines all sensor levels associated with the maximum energy emission timing and determines whether any are higher than a level that would suggest at least one sensor was in saturation. If any sensor is in saturation, the capture unit 10 defaults to using only the digitised sensor values recorded during the time that only 50% energy was emitted from the marker. In this way the capture unit can make a significant contribution to filtering out data from saturated sensors, which would degrade the accuracy of the marker position calculation.
In order to compensate for ambient radiation, the capture unit 10 subtracts from the saturation-filtered energy signal the level recorded during the time of zero illumination of the marker. Since the time interval between the two samples is very small, the ambient energy level can be assumed to equal the ambient energy present during the time that the marker was illuminated at either 100% or 50% of the maximum. Since the effect of ambient energy on marker energy detection can be considered to follow a simple superimposition rule, the final signal level attributed to the marker is the illuminated signal level less the ambient signal level. The capture unit 10 therefore performs the subtraction and stores the result for subsequent transmission to the control unit 5 for processing.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4146924||Sep 22, 1975||Mar 27, 1979||Board Of Regents For Education Of The State Of Rhode Island||System for visually determining position in space and/or orientation in space and apparatus employing same|
|US4396945||Aug 19, 1981||Aug 2, 1983||Solid Photography Inc.||Method of sensing the position and orientation of elements in space|
|US5086404||Mar 27, 1991||Feb 4, 1992||Claussen Claus Frenz||Device for simultaneous continuous and separate recording and measurement of head and body movements during standing, walking and stepping|
|US5196900||Oct 9, 1990||Mar 23, 1993||Metronor A.S.||Method and sensor for opto-electronic angle measurements|
|US5828770||Feb 20, 1996||Oct 27, 1998||Northern Digital Inc.||System for determining the spatial position and angular orientation of an object|
|US5963891||Apr 24, 1997||Oct 5, 1999||Modern Cartoons, Ltd.||System for tracking body movements in a virtual reality system|
|US6324296 *||Dec 4, 1997||Nov 27, 2001||Phasespace, Inc.||Distributed-processing motion tracking system for tracking individually modulated light points|
|EP0162713A2||May 22, 1985||Nov 27, 1985||CAE Electronics Ltd.||Optical position and orientation measurement techniques|
|EP0753836A2||Jul 12, 1996||Jan 15, 1997||Sony Corporation||A three-dimensional virtual reality space sharing method and system|
|GB2002986A||Title not available|
|GB2280504A||Title not available|
|GB2289756A||Title not available|
|GB2348280A||Title not available|
|WO1994023647A1||Apr 22, 1994||Oct 27, 1994||Pixsys, Inc.||System for locating relative positions of objects|
|WO2000070304A1||May 15, 2000||Nov 23, 2000||Snap-On Technologies, Inc.||Active target wheel aligner|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6801637 *||Feb 22, 2001||Oct 5, 2004||Cybernet Systems Corporation||Optical body tracker|
|US7264554||Jan 26, 2006||Sep 4, 2007||Bentley Kinetics, Inc.||Method and system for athletic motion analysis and instruction|
|US7729515||Oct 31, 2006||Jun 1, 2010||Electronic Scripting Products, Inc.||Optical navigation apparatus using fixed beacons and a centroid sensing device|
|US7826641||Sep 3, 2009||Nov 2, 2010||Electronic Scripting Products, Inc.||Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features|
|US7961909||Sep 18, 2009||Jun 14, 2011||Electronic Scripting Products, Inc.||Computer interface employing a manipulated object with absolute pose detection component and a display|
|US8368647 *||Aug 1, 2008||Feb 5, 2013||Ming-Yen Lin||Three-dimensional virtual input and simulation apparatus|
|US8553935||May 25, 2011||Oct 8, 2013||Electronic Scripting Products, Inc.||Computer interface employing a manipulated object with absolute pose detection component and a display|
|US8616989||Aug 7, 2007||Dec 31, 2013||K-Motion Interactive, Inc.||Method and system for athletic motion analysis and instruction|
|US9229540||Aug 22, 2011||Jan 5, 2016||Electronic Scripting Products, Inc.||Deriving input from six degrees of freedom interfaces|
|US9235934||Nov 24, 2014||Jan 12, 2016||Electronic Scripting Products, Inc.||Computer interface employing a wearable article with an absolute pose detection component|
|US9291562||Oct 10, 2013||Mar 22, 2016||Max-Planck-Gesellschaft Zur Foerderung Der Wissenschaften E.V.||Method and apparatus for tracking a particle, particularly a single molecule, in a sample|
|US9323055||May 26, 2006||Apr 26, 2016||Exelis, Inc.||System and method to display maintenance and operational instructions of an apparatus using augmented reality|
|US9324229 *||Jan 19, 2012||Apr 26, 2016||Exelis, Inc.||System and method to display maintenance and operational instructions of an apparatus using augmented reality|
|US20010024512 *||Feb 22, 2001||Sep 27, 2001||Nestor Yoronka||Optical body tracker|
|US20050105772 *||Oct 5, 2004||May 19, 2005||Nestor Voronka||Optical body tracker|
|US20060011805 *||May 27, 2003||Jan 19, 2006||Bernd Spruck||Method and device for recording the position of an object in space|
|US20060166737 *||Jan 26, 2006||Jul 27, 2006||Bentley Kinetics, Inc.||Method and system for athletic motion analysis and instruction|
|US20070211239 *||Oct 31, 2006||Sep 13, 2007||Electronic Scripting Products, Inc.||Optical navigation apparatus using fixed beacons and a centroid sensing device|
|US20070270214 *||Aug 7, 2007||Nov 22, 2007||Bentley Kinetics, Inc.||Method and system for athletic motion analysis and instruction|
|US20090033623 *||Aug 1, 2008||Feb 5, 2009||Ming-Yen Lin||Three-dimensional virtual input and simulation apparatus|
|US20120120070 *||Jan 19, 2012||May 17, 2012||Yohan Baillot||System and method to display maintenance and operational instructions of an apparatus using augmented reality|
|U.S. Classification||702/150, 382/103, 356/139.03, 700/259, 348/139, 382/107|
|International Classification||G01S5/16, G01S3/783, G01B11/03, G01B11/00, G01B11/16|
|Cooperative Classification||G01B11/00, G01B11/16, G01S5/163, G01B11/002, G01S3/783|
|European Classification||G01B11/00, G01B11/16|
|Nov 9, 2001||AS||Assignment|
Owner name: HYPERVISION LIMITED, UNITED KINGDOM
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MAJOE, DENNIS;REEL/FRAME:012532/0736
Effective date: 20010221
|May 19, 2006||AS||Assignment|
Owner name: ASCENSION TECHNOLOGY CORPORATION, VERMONT
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HYPERVISION LTD.;REEL/FRAME:017636/0624
Effective date: 20060412
|Aug 29, 2006||FPAY||Fee payment|
Year of fee payment: 4
|Dec 1, 2010||FPAY||Fee payment|
Year of fee payment: 8
|Aug 20, 2012||AS||Assignment|
Owner name: ROPER ASCENSION ACQUISITION, INC., FLORIDA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ASCENSION TECHNOLOGY CORPORATION;REEL/FRAME:028816/0923
Effective date: 20120531
Owner name: ASCENSION TECHNOLOGY CORPORATION, FLORIDA
Free format text: CHANGE OF NAME;ASSIGNOR:ROPER ASCENSION ACQUISITION, INC.;REEL/FRAME:028816/0920
Effective date: 20120531
|Feb 6, 2015||REMI||Maintenance fee reminder mailed|
|Jul 1, 2015||LAPS||Lapse for failure to pay maintenance fees|
|Aug 18, 2015||FP||Expired due to failure to pay maintenance fee|
Effective date: 20150701