Publication number | US8032327 B2 |

Publication type | Grant |

Application number | US 12/959,532 |

Publication date | Oct 4, 2011 |

Filing date | Dec 3, 2010 |

Priority date | Mar 11, 2005 |

Fee status | Paid |

Also published as | CA2600926A1, CA2600926C, CA2656163A1, CA2656163C, CN101189487A, CN101189487B, EP1877726A1, EP1877726A4, EP1877726B1, EP2230482A1, EP2230482B1, EP2278271A2, EP2278271A3, EP2278271B1, US7912673, US8140295, US20080201101, US20110074929, US20110074930, WO2006094409A1, WO2006094409B1 |

Inventors | Patrick Hebert, Éric Saint-Pierre, Dragan Tubic |

Original Assignee | Creaform Inc. |


Abstract

A method for obtaining three-dimensional surface points of an object in an object coordinate system having two groups of steps. The method includes providing a set of target positioning features on the object. In a first group of steps: acquiring 2D first images of the object; extracting 2D positioning features; calculating a first set of calculated 3D positioning features; computing first transformation parameters; cumulating the first set of transformed 3D positioning features to provide and augment the set of reference 3D positioning features. In a second group of steps: providing a projected pattern on a surface of the object; acquiring 2D second images of the object; extracting 2D surface points and second sets of 2D positioning features; calculating a set of 3D surface points; calculating a second set of calculated 3D positioning features; computing second transformation parameters; transforming the 3D surface points into transformed 3D surface points.

Claims (5)

1. A method for obtaining three-dimensional surface points of an object in an object coordinate system, comprising:

providing a set of target positioning features on said object, each of said target positioning features being provided at a fixed position on said object, said object coordinate system being defined using said target positioning features, said target positioning features being provided by one of a set of external fixed projectors projecting said features and fixedly secured features;

in a first group of steps:

acquiring at least a pair of 2D first images of said object by at least a pair of cameras, at least a first portion of said set of target positioning features being apparent on said pair of first images, each said pair of said 2D first images being acquired from a corresponding first pair of viewpoints referenced in a sensing device coordinate system;

using an electronic chip in communication with said at least said pair of cameras for implementing the steps of:

extracting, from said 2D first images, at least two first sets of 2D positioning features from a reflection of said target positioning features of said first portion on said surface;

calculating a first set of calculated 3D positioning features in said sensing device coordinate system using said first sets of 2D positioning features;

computing first transformation parameters for characterizing a current spatial relationship between said sensing device coordinate system and said object coordinate system, by matching corresponding features between said first set of calculated 3D positioning features in said sensing device coordinate system and a set of reference 3D positioning features in said object coordinate system, said reference 3D positioning features being cumulated from previous observations;

transforming said first set of calculated 3D positioning features into a first set of transformed 3D positioning features in said object coordinate system using said first transformation parameters;

cumulating said first set of transformed 3D positioning features to provide and augment said set of reference 3D positioning features;

in a second group of steps:

providing a projected pattern on a surface of said object using a pattern projector;

acquiring at least a pair of 2D second images of said object by said cameras, said projected pattern and at least a second portion of said set of target positioning features being apparent on said pair of second images, each said pair of said 2D second images being acquired from a corresponding second pair of viewpoints referenced in said sensing device coordinate system;

wherein said first portion of said set of target positioning features and said second portion of said set of target positioning features being one of the same, partly the same or different;

using said electronic chip for implementing the steps of:

extracting, from said 2D second images, at least one set of 2D surface points from a reflection of said projected pattern on said surface, and at least two second sets of 2D positioning features from a reflection of said target positioning features of said second portion on said surface;

calculating a set of 3D surface points in said sensing device coordinate system using said set of 2D surface points;

calculating a second set of calculated 3D positioning features in said sensing device coordinate system using said second sets of 2D positioning features;

computing second transformation parameters for characterizing a current spatial relationship between said sensing device coordinate system and said object coordinate system, by matching corresponding features between said second set of calculated 3D positioning features in said sensing device coordinate system and said set of reference 3D positioning features in said object coordinate system;

transforming said set of 3D surface points into a set of transformed 3D surface points in said object coordinate system using said second transformation parameters.

2. The method as claimed in claim 1, further comprising, in said second group of steps:

transforming said second set of calculated 3D positioning features into a second set of transformed 3D positioning features in said object coordinate system using said second transformation parameters;

cumulating said second set of transformed 3D positioning features to provide and augment said set of reference 3D positioning features.

3. The method as claimed in claim 1, wherein, in said second group of steps, said electronic chip further implements the step of cumulating said set of transformed 3D surface points to provide a 3D surface model of said object.

4. The method as claimed in claim 1, wherein said positioning features are fixedly secured on said surface of said object.

5. The method as claimed in claim 4, wherein said positioning features are retro-reflective targets, said method further comprising illuminating at least part of said set of positioning features by a light source.

Description

The present application is a continuation of U.S. patent application Ser. No. 11/817,300 filed Aug. 28, 2007 by Applicant, now U.S. Pat. No. 7,912,673, which is a national phase entry of PCT patent application No. PCT/CA06/00370 filed on Mar. 13, 2006, which in turn claims priority benefit of U.S. provisional patent application No. 60/660,471 filed Mar. 11, 2005, the specifications of which are hereby incorporated by reference.

The present invention generally relates to the field of three-dimensional scanning of an object's surface geometry, and, more particularly, to a portable three-dimensional scanning apparatus for hand-held operations.

Three-dimensional scanning and digitization of the surface geometry of objects is now commonly used in many industries and services, and its applications are numerous. A few examples of such applications are: inspection and measurement of shape conformity in industrial production systems, digitization of clay models for industrial design and styling applications, reverse engineering of existing parts with complex geometry, interactive visualization of objects in multimedia applications, three-dimensional documentation of artwork and artifacts, and human body scanning for better orthosis adaptation or biometry.

There remains a need to improve the scanning devices used for 3D scanning of an object.

According to one broad aspect, there is provided a method for obtaining three-dimensional surface points of an object in an object coordinate system having two groups of steps. The method first comprises providing a set of target positioning features on the object, each of the target positioning features being provided at a fixed position on the object, the object coordinate system being defined using the target positioning features, the target positioning features being provided by one of a set of external fixed projectors projecting the features and fixedly secured features.

In one embodiment, the method comprises, in a first group of steps acquiring at least a pair of 2D first images of the object by at least a pair of cameras, at least a first portion of the set of target positioning features being apparent on the pair of first images, each pair of the 2D first images being acquired from a corresponding first pair of viewpoints referenced in a sensing device coordinate system; using an electronic chip in communication with the at least the pair of cameras for implementing the steps of: extracting, from the 2D first images, at least two first sets of 2D positioning features from a reflection of the target positioning features of the first portion on the surface; calculating a first set of calculated 3D positioning features in the sensing device coordinate system using the first sets of 2D positioning features; computing first transformation parameters for characterizing a current spatial relationship between the sensing device coordinate system and the object coordinate system, by matching corresponding features between the first set of calculated 3D positioning features in the sensing device coordinate system and a set of reference 3D positioning features in the object coordinate system, the reference 3D positioning features being cumulated from previous observations; transforming the first set of calculated 3D positioning features into a first set of transformed 3D positioning features in the object coordinate system using the first transformation parameters; cumulating the first set of transformed 3D positioning features to provide and augment the set of reference 3D positioning features.

In one embodiment, the method comprises, in a second group of steps, providing a projected pattern on a surface of the object using a pattern projector; acquiring at least a pair of 2D second images of the object by the cameras, the projected pattern and at least a second portion of the set of target positioning features being apparent on the pair of second images, each pair of the 2D second images being acquired from a corresponding second pair of viewpoints referenced in the sensing device coordinate system; wherein the first portion of the set of target positioning features and the second portion of the set of target positioning features being one of the same, partly the same and different; using the electronic chip for implementing the steps of: extracting, from the 2D second images, at least one set of 2D surface points from a reflection of the projected pattern on the surface, and at least two second sets of 2D positioning features from a reflection of the target positioning features of the second portion on the surface; calculating a set of 3D surface points in the sensing device coordinate system using the set of 2D surface points; calculating a second set of calculated 3D positioning features in the sensing device coordinate system using the second sets of 2D positioning features; computing second transformation parameters for characterizing a current spatial relationship between the sensing device coordinate system and the object coordinate system, by matching corresponding features between the second set of calculated 3D positioning features in the sensing device coordinate system and the set of reference 3D positioning features in the object coordinate system; transforming the set of 3D surface points into a set of transformed 3D surface points in the object coordinate system using the second transformation parameters.

According to another broad aspect, there is provided a method for obtaining three-dimensional surface points of an object in an object coordinate system having two groups of steps. The method first comprises providing a set of target positioning features on the object. In a first group of steps, acquiring at least a pair of 2D first images of the object, at least a first portion of the set of target positioning features being apparent on the pair of first images, extracting, from the 2D first images, at least two first sets of 2D positioning features from a reflection of the target positioning features of the first portion on the surface; calculating a first set of calculated 3D positioning features in the sensing device coordinate system using the first sets of 2D positioning features; computing first transformation parameters for characterizing a current spatial relationship between the sensing device coordinate system and the object coordinate system, cumulating the first set of transformed 3D positioning features to provide and augment the set of reference 3D positioning features. 
In a second group of steps, providing a projected pattern on a surface of the object using a pattern projector; acquiring at least a pair of 2D second images of the object by the cameras, the projected pattern and at least a second portion of the set of target positioning features being apparent on the pair of second images, extracting, from the 2D second images, at least one set of 2D surface points from a reflection of the projected pattern on the surface, and at least two second sets of 2D positioning features from a reflection of the target positioning features of the second portion on the surface; calculating a set of 3D surface points in the sensing device coordinate system using the set of 2D surface points; calculating a second set of calculated 3D positioning features in the sensing device coordinate system using the second sets of 2D positioning features; computing second transformation parameters for characterizing a current spatial relationship between the sensing device coordinate system and the object coordinate system, transforming the set of 3D surface points into a set of transformed 3D surface points in the object coordinate system using the second transformation parameters.

According to another broad aspect, there is provided a method for obtaining three-dimensional surface points of an object in an object coordinate system. The method comprises providing a projected pattern on a surface of the object using a pattern projector; providing a set of target positioning features on the object, each of the target positioning features being provided at a fixed position on the object, the object coordinate system being defined using the target positioning features, the target positioning features being provided by one of a set of external fixed projectors projecting the features and affixed features; acquiring at least a pair of 2D images of the object by at least a pair of cameras with a known spatial relationship, the projected pattern and at least a portion of the set of target positioning features being apparent on the images, each of the 2D images being acquired from a view point referenced in a sensing device coordinate system; using an electronic chip in communication with the at least the pair of cameras for implementing the steps of: extracting, from the 2D images, at least one set of 2D surface points from a reflection of the projected pattern on the surface, and at least two sets of 2D positioning features from a reflection of the target positioning features on the surface; calculating a set of 3D surface points in the sensing device coordinate system using the set of 2D surface points; calculating a set of calculated 3D positioning features in the sensing device coordinate system using the sets of 2D positioning features; computing transformation parameters for characterizing a current spatial relationship between the sensing device coordinate system and the object coordinate system, by matching corresponding features between the set of calculated 3D positioning features in the sensing device coordinate system and a set of reference 3D positioning features in the object coordinate system, the reference 3D positioning features 
being cumulated from previous observations; and transforming the set of 3D surface points into a set of transformed 3D surface points in the object coordinate system using the transformation parameters.

In one embodiment, the electronic chip further implements the steps of: transforming the set of calculated 3D positioning features into a set of transformed 3D positioning features in the object coordinate system using the transformation parameters; and cumulating the set of transformed 3D positioning features to provide and augment the set of reference 3D positioning features.

According to another broad aspect, there is provided a system for obtaining three-dimensional surface points of an object in an object coordinate system. The system comprises a set of target positioning features on the object, each of the target positioning features being provided at a fixed position on the object, the object coordinate system being defined using the target positioning features; a sensing device having a pattern projector for providing a projected pattern on a surface of the object, at least a pair of cameras each for acquiring at least one 2D image of the object, each of the 2D images being acquired from a viewpoint referenced in a sensing device coordinate system; an image processor for extracting, from the 2D images, at least one set of 2D surface points from a reflection of the projected pattern on the surface apparent on the images, and at least two sets of 2D positioning features from a reflection of at least a portion of the target positioning features on the surface apparent on the images; a 3D surface point calculator for calculating a set of 3D surface points in the sensing device coordinate system using the set of 2D surface points; a 3D positioning feature calculator for calculating a set of calculated 3D positioning features in the sensing device coordinate system using the sets of 2D positioning features; a positioning features matcher for computing transformation parameters to characterize a current spatial relationship between the sensing device coordinate system and the object coordinate system, by matching corresponding features between the set of calculated 3D positioning features in the sensing device coordinate system and a set of reference 3D positioning features in the object coordinate system, the reference 3D positioning features being cumulated from previous observations; a 3D surface point transformer for transforming the set of 3D surface points into a set of transformed 3D surface points in the object coordinate system 
using the transformation parameters.

Having thus generally described the nature of the invention, reference will now be made to the accompanying drawings, showing by way of illustration a preferred embodiment thereof, and in which:

The present device allows simultaneous scanning and modeling of the object's surface while accumulating a second model of the positioning features in real-time using a single hand-held sensor. Furthermore, by fixing additional physical targets as positioning features on an object, it is possible to hold the object in one hand while holding the scanner in the other hand, without the quality of the calculated sensor positions depending on the object's surface geometry.

Referring to the drawings, the three-dimensional surface scanning system is generally shown at **10**.

Sensing Device

The system comprises a sensing device **12** described in more detail hereinafter. The sensing device **12** collects and transmits a set of images **13**, namely a frame, of the observed scene to an image processor **14**. These images are collected from at least two viewpoints, each having its own center of projection. The relevant information encompassed in the images results from the laser projection pattern reflected on the object's surface, as well as from positioning features that are used to calculate the relative position of the sensing device with respect to other frame captures. Since all images in a given frame are captured simultaneously and contain both positioning and surface measurements, synchronization of positioning and surface measurement is implicit.

The positioning features are secured on the object such that the object can be moved in space while the positioning features stay fixed on the object and, accordingly, fixed with respect to the object's coordinate system. This allows the object to be moved in space while its surface is being scanned by the sensing device.

Image Processor

The image processor **14** extracts positioning features and surface points from each image. For each image, a set of 2D surface points **15** and a second set of observed 2D positioning features **21** are output. These points and features are identified in the images based on their intrinsic characteristics. Positioning features are either the trace of isolated laser points or circular retro-reflective targets. The pixels associated with these features contrast with the background and may be isolated with simple image processing techniques before estimating their position using centroid or ellipse fitting (see E. Trucco and A. Verri, "Introductory Techniques for 3-D Computer Vision", Prentice Hall, 1998, pp. 101-108). Using circular targets allows one to extract surface normal orientation information from the equation of the fitted ellipse, thereby facilitating sensing device positioning. The sets of surface points are discriminated from the positioning features since the laser pattern projector produces contrasting curve sections in the images, which thus present a different 2D shape. The image curve sections are isolated as single blobs and, for each of these blobs, the curve segment is analyzed to extract a set of points on the curve with sub-pixel precision. This is accomplished by convolving a differential operator across the curve section and interpolating the zero-crossing of its response.
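The target-extraction step above can be sketched as an intensity-weighted centroid over a pre-segmented blob; this is a minimal illustration assuming a grayscale NumPy image and a binary mask (the function name and interface are illustrative, not from the patent):

```python
import numpy as np

def blob_centroid(image, mask):
    """Intensity-weighted centroid of a bright blob, with sub-pixel precision.

    image: 2D grayscale array; mask: boolean array marking the blob's pixels.
    Returns (x, y) in image coordinates.
    """
    ys, xs = np.nonzero(mask)
    w = image[ys, xs].astype(float)   # pixel intensities act as weights
    w_sum = w.sum()
    return (xs @ w) / w_sum, (ys @ w) / w_sum
```

Ellipse fitting (as in Trucco and Verri) would additionally recover the target's apparent orientation; the centroid alone already locates it to sub-pixel accuracy.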

For a crosshair laser pattern, one can benefit from the architecture of the apparatus described hereinafter. In this configuration with two cameras and a crosshair pattern projector, the cameras are aligned such that one of the two laser planes produces a single straight line in each camera at a constant position. This is the inactive laser plane for a given camera; the inactive laser planes are opposite for the two cameras. This configuration, proposed by Hébert (see P. Hébert, "A Self-Referenced Hand-Held Range Sensor", in Proc. of the 3rd International Conference on 3D Digital Imaging and Modeling (3DIM 2001), May 28-Jun. 1, 2001, Quebec City, Canada, pp. 5-12), greatly simplifies the image processing task. It also simplifies the assignment of each set of 2D surface points to a laser plane of the crosshair.

While the sets of surface points **15** follow one path in the system to recover the whole scan of the surface geometry, the sets of observed 2D positioning features **21** follow a second path and are used to recover the relative position of the sensing device with respect to the object's surface. Both types of sets are further processed to obtain 3D information in the sensing device coordinate system.

3D Positioning Features Calculator

Since the sensing device is calibrated, matched positioning features between camera viewpoints are used to estimate their 3D position using the 3D positioning features calculator **22**. The sets of observed 2D positioning features are matched using the epipolar constraint to obtain unambiguous matches. The epipolar lines are calculated using the fundamental matrix, which is itself calculated from the calibrated projection matrices of the cameras. Then, from the known projection matrices of the cameras, triangulation is applied to calculate a single set of calculated 3D positioning features in the sensing device coordinate system **23**. This set of points is fed to the positioning features matcher for providing the observation on the current state of the sensing device, and to the 3D positioning features transformer for an eventual update of the reference 3D positioning features in the object coordinate system.
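The epipolar screening and triangulation described above can be sketched as follows, assuming calibrated 3×4 projection matrices and a fundamental matrix F already derived from them (names and interfaces are illustrative, not from the patent):

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance from point x2 (image 2) to the epipolar line of x1 (image 1).

    A small distance indicates an admissible, unambiguous match candidate.
    """
    x1h = np.array([x1[0], x1[1], 1.0])
    l = F @ x1h                         # epipolar line ax + by + c = 0 in image 2
    return abs(l[0] * x2[0] + l[1] * x2[1] + l[2]) / np.hypot(l[0], l[1])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched 2D pair into a 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],          # each 2D observation contributes
        x1[1] * P1[2] - P1[1],          # two linear constraints on X
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null vector of A, homogeneous 3D point
    return X[:3] / X[3]
```

With noisy data the SVD solution is the standard linear least-squares triangulation; a nonlinear refinement could follow but is not required for the sketch.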

3D Surface Point Calculator

The 3D surface point calculator **16** takes as input the extracted sets of 2D surface points **15**. These points can be associated with a section of the laser projected pattern, for instance one of the two planes of the crosshair pattern. When the association is known, each of the 2D points can be transformed into a 3D point in the sensing device coordinate system by intersecting the corresponding cast ray with the laser plane. The equation of the ray is obtained from the projection matrix of the associated camera. The laser plane equation is obtained using a pre-calibration procedure (see P. Hébert, "A Self-Referenced Hand-Held Range Sensor", in Proc. of the 3rd International Conference on 3D Digital Imaging and Modeling (3DIM 2001), May 28-Jun. 1, 2001, Quebec City, Canada, pp. 5-12) or by exploiting a table look-up after calibrating the sensing device with an accurate translation stage, for instance. Both approaches are adequate. In the first case, the procedure is simple and requires no sophisticated equipment, but it requires a very good estimation of the cameras' intrinsic and extrinsic parameters.
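The ray-plane intersection at the heart of this step can be sketched as follows, assuming a camera ray (origin and direction in the sensing device coordinate system) and a calibrated laser plane n·X + d = 0 (illustrative names):

```python
import numpy as np

def ray_plane_point(origin, direction, plane):
    """Intersect the camera ray X(s) = origin + s * direction with the
    laser plane n . X + d = 0; returns the 3D point in sensor coordinates."""
    n, d = plane
    n = np.asarray(n, float)
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    s = -(n @ origin + d) / (n @ direction)   # parameter where the ray meets the plane
    return origin + s * direction
```

A ray parallel to the plane (n·direction = 0) has no intersection; in practice such 2D points would be rejected before this step.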

It is also possible to avoid associating each 2D point to a specific structure of the laser pattern. This is particularly interesting for more complex or general patterns. In this case, it is still possible to calculate 3D surface points using the fundamental matrix and exploiting the epipolar constraint to match points. When this can be done without ambiguity, triangulation can be calculated in the same way it is applied by the 3D positioning features calculator **22**.

The 3D surface point calculator **16** thus outputs a set of calculated 3D surface points in the sensing device coordinate system **17**. This can be an unorganized set or, preferably, the set is organized such that 3D points associated with connected segments in the images are grouped for estimating 3D curve tangents by differentiation. This information can be exploited by the surface reconstructor for improved quality of the recovered surface model **31**.

Positioning Subsystem

The task of the positioning subsystem, mainly implemented in the positioning features matcher **24** and in the reference positioning features builder **28**, is to provide transformation parameters **25** for each set of calculated 3D surface points **17**. These transformation parameters **25** make it possible to transform calculated 3D surface points **17** into a single object coordinate system while preserving their structure; the transformation is rigid. This is accomplished by building and maintaining a set of reference 3D positioning features **29** in the object coordinate system. The positioning features can be a set of 3D points, a set of 3D points with associated surface normals, or any other surface characteristic. In this preferred embodiment it is assumed that all positioning features are 3D points, represented as column vectors [x, y, z]^T containing three components denoting the position of the points along the three coordinate axes.

At the beginning of a scanning session, the set of reference 3D positioning features **29** is empty. As the sensing device **12** provides the first measurements and the system calculates the first set of calculated 3D positioning features **23**, these features are copied into the set of reference 3D positioning features **29** using the identity transformation. This set thus becomes the reference set against which all subsequent sets of calculated 3D positioning features **23** are matched, and this first sensing device position defines the object coordinate system into which all 3D surface points are aligned.

After creation of the initial set of reference 3D positioning features **29**, subsequent sets of calculated 3D positioning features **23** are first matched against the reference set **29**. The matching operation is divided into two tasks: i) finding corresponding features between the set of calculated 3D positioning features in the sensing device coordinate system **23** and the set of reference 3D features in the object coordinate system, and ii) computing the transformation parameters **25** of the optimal rigid 3D transformation that best aligns the two sets. Once the parameters have been computed, they are used to transform both calculated 3D positioning features **23** and calculated 3D surface points **17** thus aligning them into the object coordinate system.

The inputs to the positioning features matcher **24** are the set of reference 3D positioning features **29**, R, and the set of calculated 3D positioning features **23**, O, along with the two sets of observed 2D positioning features **21**, P1 and P2, which were also used by the 3D positioning features calculator **22**, as explained above. Matching these sets is the problem of finding two subsets O_m ⊂ O and R_m ⊂ R, containing N features each, such that all pairs of points (o_i, r_i) with o_i ∈ O_m and r_i ∈ R_m represent the same physical features. Finding these subsets is accomplished by finding the maximum number of segments of points (o_i o_j ; r_i r_j) having the same length, that is, satisfying:

| ‖o_i − o_j‖ − ‖r_i − r_j‖ | ≤ ε for all i, j ∈ {1, . . . , N}, i ≠ j,  (1)

where ε is a predefined threshold which is set to correspond to the accuracy of the sensing device. This constraint imposes that the difference in distance between a corresponding pair of points in the two sets be negligible.

This matching operation is solved as a combinatorial optimization problem where each segment of points from the set O is progressively matched against each segment of points in the set R. Each matched segment is then expanded by forming an additional segment using the remaining points in each of the two sets. If the two segments satisfy the constraint (1), a third segment is formed, and so on as long as the constraint is satisfied. Otherwise the pair is discarded and the next one is examined. The solution is the largest set of segments satisfying (1). Other algorithms can be used for the same purpose (see M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, vol. 24, no. 6, June 1981, pp. 381-395).

As long as the number of elements in the set of reference 3D positioning features **29** is relatively low (typically fewer than fifteen), the computational complexity of the above approach is acceptable for real-time operation. In practice, however, the number of reference 3D positioning features **29** can easily reach several hundred. Since the computational complexity grows exponentially with the number of features, the computation of corresponding features then becomes too slow for real-time applications. The problem is solved by noting that the number of positioning features visible from any particular viewpoint is small, being limited by the finite field of view of the sensing device.

This means that if the calculated 3D positioning features **23** can be matched against the reference 3D positioning features **29**, then the matched features from the reference set are located in a small neighborhood whose size is determined by the size of the set of calculated 3D positioning features **23**. This also means that the number of points in this neighborhood should be small as well (typically fewer than fifteen). To exploit this property for accelerating matching, the above method is modified as follows. Prior to matching, a set of neighboring features {N_i} is created for each reference feature. After the initial segment of points is matched, it is expanded by adding an additional segment using only points in the neighborhood set {N_i} of the first matched feature. By doing so, the number of points used for matching remains low regardless of the size of the set of reference 3D positioning features **29**, thus preventing an exponential growth of the computational complexity.

Alternatively, exploiting spatial correlation of sensing device position and orientation can be used to improve matching speed. By assuming that the displacement of the sensing device is small with respect to the size of the set of positioning features, matching can be accomplished by finding the closest reference feature for each observed positioning feature. The same principle can be used in 2D, that is, by finding closest 2D positioning features.
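Under the small-displacement assumption, matching reduces to a nearest-neighbor search; a minimal sketch follows, where `max_dist`, the rejection threshold, is an illustrative parameter rather than a value from the patent.

```python
import math

def match_by_proximity(observed, reference, max_dist=5.0):
    """Match each observed 3D positioning feature to its closest reference
    feature, assuming the sensing device moved little since the last frame;
    pairs farther apart than max_dist are left unmatched."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, o in enumerate(observed):
        j, dj = min(((j, d(o, r)) for j, r in enumerate(reference)),
                    key=lambda t: t[1])
        if dj <= max_dist:
            matches.append((i, j))
    return matches
```

The same closest-feature principle applies in 2D by comparing image coordinates instead of 3D positions.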

Once matching is done, the two sets need to be aligned by computing the optimal rigid transformation parameters [M T], in the least-squares sense, such that the following cost function is minimized:

Σ_i ∥M o_i + T − r_i∥²,

where o_i ∈ O_m are the matched calculated 3D positioning features and r_i ∈ R are the corresponding reference 3D positioning features.

The transformation parameters consist of a 3×3 rotation matrix M and a 3×1 translation vector T. Such a transformation can be found using dual quaternions as described in M. W. Walker, L. Shao and R. A. Volz, “Estimating 3-D location parameters using dual number quaternions”, CVGIP: Image Understanding, vol. 54, no. 3, November 1991, pp. 358-367. In order to compute this transformation, at least three common positioning features have to be found. Otherwise both positioning features and surface points are discarded for the current frame.
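The patent computes [M T] with the cited dual-quaternion method. As a hedged alternative, the SVD-based (Kabsch) solution sketched below reaches the same least-squares optimum and, like the original, needs at least three non-collinear correspondences.

```python
import numpy as np

def rigid_transform(O, R):
    """Least-squares rigid alignment: find the 3x3 rotation M and 3x1
    translation T minimizing sum ||M o_i + T - r_i||^2 over matched pairs.
    Requires at least three non-collinear correspondences."""
    O, R = np.asarray(O, float), np.asarray(R, float)
    co, cr = O.mean(axis=0), R.mean(axis=0)
    H = (O - co).T @ (R - cr)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    M = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cr - M @ co
    return M, T
```

Centering both sets before estimating the rotation decouples M from T, which is why T is recovered afterwards from the two centroids.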

An alternative method for computing the rigid transformation is to minimize the distance between observed 2D positioning features **21** and the projections of reference 3D positioning features **29**. Using the perspective projection transformation Π, the rigid transformation [M T] that is optimal in the least-squares sense is the one that minimizes:

Σ_i ∥p_i − Π(M⁻¹(r_i − T))∥²,

where p_i ∈ P_1 or p_i ∈ P_2 are observed 2D features that correspond to the 3D observed feature o_i ∈ O_m, and r_i ∈ R is the reference feature matched to o_i. The rigid transformation [M T] can be found by minimizing the above cost function using an optimization algorithm such as the Levenberg-Marquardt method.
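A sketch of evaluating this reprojection cost for a candidate [M T]. The pinhole intrinsics matrix K and the transform-direction convention (x_object = M x_sensor + T, so reference features are mapped back through the inverse before projection) are assumptions for illustration, as are all names.

```python
import numpy as np

def reprojection_cost(M, T, ref_feats, P, K):
    """Sum of squared 2D distances between observed features p_i and the
    perspective projections (through intrinsics K) of the matched reference
    3D features, brought into the sensor frame by the inverse of [M T]."""
    M, T = np.asarray(M, float), np.asarray(T, float)
    X = (np.asarray(ref_feats, float) - T) @ M   # rows are M^T (r_i - T)
    x = X @ K.T                                  # homogeneous pixel coordinates
    proj = x[:, :2] / x[:, 2:3]                  # perspective divide (the map Pi)
    return float(((proj - np.asarray(P, float)) ** 2).sum())
```

A Levenberg-Marquardt style optimizer would then minimize the corresponding residuals over a parameterization of M (e.g. a rotation vector) and T.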

3D Positioning Features Transformer

Once the rigid transformation is computed, the 3D positioning features transformer **26** transforms the set of calculated 3D positioning features from the sensing device coordinate system **23** to the object coordinate system **27**. The transformed 3D positioning features are used to update the set of reference 3D positioning features **29** in two ways. First, if only a subset of observed features has been matched against the set of reference 3D positioning features **29**, the unmatched observed features represent newly observed features that are added to the reference set. Second, the features that have been re-observed and matched can be either discarded (since they are already in the reference set) or used to improve, that is, to filter, the existing features. For example, all observations of the same feature can be averaged together in order to compute the mean feature position. By doing so, the variance of the measurement noise is reduced, thus improving the accuracy of the positioning system.
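The filtering of re-observed features can be as simple as a running average, sketched here with an illustrative class name; averaging n observations of the same target reduces the noise variance on the stored position by a factor of n.

```python
class ReferenceFeature:
    """One reference 3D positioning feature maintained as a running average
    of all observations that have been matched to it."""

    def __init__(self, position):
        self._sum = list(position)
        self._count = 1

    def add_observation(self, position):
        """Fold a re-observed (matched and transformed) position into the average."""
        self._count += 1
        self._sum = [s + p for s, p in zip(self._sum, position)]

    @property
    def position(self):
        return tuple(s / self._count for s in self._sum)
```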

3D Surface Point Transformer

The processing steps for the surface points are simple once the positioning features matcher **24** makes the transformation parameters **25** available. The set of calculated 3D surface points in the sensing device coordinate system **17** provided by the 3D surface point calculator **16** is transformed by the 3D surface point transformer **18** using the same transformation parameters **25** provided by the positioning features matcher **24**; these parameters are the main link of information between the positioning subsystem and the integration of surface points in the object coordinate system. The resulting set of transformed 3D surface points in the object coordinate system **19** is thus naturally aligned in the same coordinate system with the set of reference 3D positioning features **29**. The final set of 3D surface points **19** can be visualized or, preferably, fed to a surface reconstructor **20** that estimates a continuous, non-redundant and possibly filtered surface representation **31**, which is displayed on a user interface display **30**, optionally with the superimposed set of reference 3D positioning features **29**.
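A minimal sketch of this transformer step, reusing the parameters [M T] produced by the matcher (the function name is illustrative):

```python
import numpy as np

def transform_surface_points(points, M, T):
    """Map calculated 3D surface points from the sensing device coordinate
    system to the object coordinate system: x_object = M x_sensor + T,
    applied row-wise to an (n, 3) array of points."""
    return np.asarray(points, float) @ np.asarray(M, float).T + np.asarray(T, float)
```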

Having described the system, a closer view is now given of the sensing device **40** that is used in this preferred embodiment of the system. The device comprises two objectives and light detectors **46** that are typically progressive scan digital cameras. The two objectives and light detectors **46** have their centers of projection separated by a distance D**1** **52**, namely the baseline, and compose a passive stereo pair of light detectors. The laser pattern projector **42** is preferably positioned at a distance D**3** **56** from the baseline of the stereo pair to compose a compact triangular structure leading to two additional active sensors, composed in the first case of the left camera and the laser pattern projector and, in the second case, of the right camera and the laser pattern projector. For these two additional active stereo pairs, the baseline D**2** **54** is depicted in the figure.

Sets of LEDs **50** are distributed around the light detectors **46**. These LEDs illuminate retro-reflective targets that are used as positioning features. The LEDs are preferably positioned as close as possible to the optical axes of the cameras in order to capture a stronger signal from the retro-reflective targets. Interference filters **48** are mounted in front of the objectives. These filters attenuate all wavelengths except the laser wavelength, which is matched to the LEDs' wavelength. This preferred triangular structure is particularly interesting when D**3** **56** is such that the triangle is isosceles with two 45 degree angles and a 90 degree angle between the two laser planes of the crosshair **44**. With this particular configuration, the crosshair pattern is oriented such that each laser plane is aligned with the center of projection of one of the cameras as well as with the center of its light detector. This corresponds to the center epipolar line, whose main advantage is that one laser plane (the inactive plane) will always be imaged as a straight line at the same position in the image, independently of the observed scene. The relevant 3D information is then extracted from the deformed second plane of light in each of the two images. The whole sensing device is thus composed of two laser profilometers, one passive stereo pair, and two modules for simultaneously capturing retro-reflective targets. This preferred configuration is compact.

For a hand-held device, the baseline D**1** will typically be around 200 mm for submillimeter accuracy at a standoff distance of 300 to 400 mm between the sensing device and the object. By scaling D**1**, the distances D**2** automatically follow. Although this arrangement is particularly useful for simplifying the discrimination between the 2D positioning features and the projected laser pattern in the images, integrating a stereo pair, and possibly one or more additional cameras for better discrimination and accuracy, makes it possible to process images where a different laser pattern is projected. Grids and circular patterns are relevant examples. Another possibility is to increase or decrease D**3** for more or less accuracy, at the cost of losing the advantage of simplified image processing. While a linear configuration (i.e. D**3**=0) would not provide all the advantages of the above described configuration, it is still one option.
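Under the preferred isosceles geometry described above (a 90 degree apex at the projector, two 45 degree angles at the cameras), the projector offset and the camera-projector baselines follow directly from D1. The sketch below assumes that geometry; the derived relation D3 = D1/2 comes from the 45 degree half-angle at the apex.

```python
import math

def sensor_geometry(D1):
    """For the isosceles configuration with a 90 degree angle at the laser
    projector, the projector sits at D3 = D1/2 from the stereo baseline,
    and each camera-projector baseline is D2 = sqrt((D1/2)^2 + D3^2)."""
    D3 = D1 / 2.0
    D2 = math.hypot(D1 / 2.0, D3)
    return D2, D3
```

For the typical hand-held baseline D1 = 200 mm, this gives D3 = 100 mm and D2 of roughly 141 mm.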

The sensing device is shown while observing an object **62**. One can see the previously described compact triangular architecture comprising two cameras with objectives **46** and a crosshair laser pattern projector **42**. The sensing device captures the image of the projected pattern **58** including a set of positioning features **60**.

While illustrated in the block diagrams as groups of discrete components communicating with each other via distinct data signal connections, it will be understood by those skilled in the art that the preferred embodiments are provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system, and many of the data paths illustrated being implemented by data communication within a computer application or operating system. The structure illustrated is thus provided for efficiency of teaching the present preferred embodiment.

One skilled in the art should understand that the positioning features, described herein as retro-reflective targets, could alternatively be provided by light sources, such as LEDs, disposed on the surface of the object to be scanned or elsewhere, or by any other means that provide targets to be detected by the sensing device. Additionally, the light sources provided on the sensing device could be omitted if the positioning features themselves provide the light to be detected by the cameras.

It should be understood that the pattern projector hereinabove described as comprising a laser light source could also use a LED source or any other appropriate light source.

It will be understood that numerous modifications thereto will appear to those skilled in the art. Accordingly, the above description and accompanying drawings should be taken as illustrative of the invention and not in a limiting sense. It will further be understood that it is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention, including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains, as may be applied to the essential features hereinbefore set forth, and as follows in the scope of the appended claims.

Patent Citations

Cited Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US4645348 | Sep 1, 1983 | Feb 24, 1987 | Perceptron, Inc. | Sensor-illumination system for use in three-dimensional measurement of objects and assemblies of objects |

US5410141 | Jun 7, 1990 | Apr 25, 1995 | Norand | Hand-held data capture system with interchangable modules |

US5661667 | Jan 4, 1995 | Aug 26, 1997 | Virtek Vision Corp. | 3D imaging using a laser projector |

US6101455 | May 14, 1998 | Aug 8, 2000 | Davis; Michael S. | Automatic calibration of cameras and structured light sources |

US6246468 | Oct 23, 1998 | Jun 12, 2001 | Cyra Technologies | Integrated system for quickly and accurately imaging and modeling three-dimensional objects |

US6508403 | May 4, 2001 | Jan 21, 2003 | Institut National D'optique | Portable apparatus for 3-dimensional scanning |

US6542249 | Jul 20, 1999 | Apr 1, 2003 | The University Of Western Ontario | Three-dimensional measurement method and apparatus |

US7487063 | Jun 11, 2004 | Feb 3, 2009 | UNIVERSITé LAVAL | Three-dimensional modeling from arbitrary three-dimensional curves |

US7912673 * | Mar 13, 2006 | Mar 22, 2011 | Creaform Inc. | Auto-referenced system and apparatus for three-dimensional scanning |

US20020041282 | Aug 3, 2001 | Apr 11, 2002 | Ricoh Company, Ltd. | Shape measurement system |

US20080201101 | Mar 13, 2006 | Aug 21, 2008 | Creaform Inc. | Auto-Referenced System and Apparatus for Three-Dimensional Scanning |

DE19502459A1 | Jan 28, 1995 | Aug 1, 1996 | Wolf Henning | Three dimensional optical measurement of surface of objects |

DE19634254A1 | Aug 24, 1996 | Mar 6, 1997 | Volkswagen Ag | Optical-numerical determination of entire surface of solid object e.g. for motor vehicle mfr. |

DE19925462C1 | Jun 2, 1999 | Feb 15, 2001 | Daimler Chrysler Ag | Method and system for measuring and testing a 3D body during its manufacture has a measuring system with an optical 3D sensor, a data processor and a testing system storing 3D theoretical data records of a 3D body's surface. |

JP2001119722A | Title not available | |||

JPH0712534A | Title not available | |||

JPH04172213A | Title not available | |||

JPH11101623A | Title not available | |||

WO2001014830A1 | Aug 18, 2000 | Mar 1, 2001 | Perceptron, Inc. | Method and apparatus for calibrating a non-contact gauging sensor with respect to an external coordinate system |

WO2001069172A1 | Mar 9, 2001 | Sep 20, 2001 | Perceptron, Inc. | A non-contact measurement device |

WO2003062744A1 | Jan 16, 2002 | Jul 31, 2003 | Faro Technologies, Inc. | Laser-based coordinate measuring device and laser-based method for measuring coordinates |

Non-Patent Citations

Reference | ||
---|---|---|

1 | C. Reich et al., "3-D shape measurement of complex objects by combining photogrammetry and fringe projection", Optical Engineering, Society of Photo-Optical Instrumentation Engineers, vol. 39 (1) Jan. 2000, pp. 224-231, USA. | |

2 | E. Trucco et al., Introductory techniques for 3-D computer vision, Prentice Hall, 1998, p. 101-108. | |

3 | F. Blais et al., "Accurate 3D Acquisition of Freely Moving Objects", in proc. of the Second International Symposium on 3D Data Processing, Visualization and Transmission, Sep. 6-9, 2004, NRC 47141, Thessaloniki, Greece. | |

4 | F. Blais, "A Review of 20 Years of Range Sensor Development", in proceedings of SPIE-IS&T Electronic Imaging, SPIE vol. 5013, 2003, pp. 62-76. | |

5 | GOM mbH: Products-Tritop-3d-Coordinate Measurement Technique using Photogrammetry, description available at http://www.gom-online.de/En/Products/tritop.html, Mar. 8, 2006. | |


7 | J.Y. Bouguet et al., 3D Photography Using Shadows in Dual-Space Geometry, Int. Journal of Computer Vision, vol. 35, No. 2, Nov.-Dec. 1999, pp. 129-149. | |

8 | M. Fischler et al., "Random sample consensus : A paradigm for model fitting with applications to image analysis and automated cartography", Communications of the Assoc. for Computing Machinery, Jun. 1981, vol. 24, No. 6, pp. 381-395. | |

9 | M.W. Walker et al., "Estimating 3-D location parameters using dual number quaternions", CVGIP : Image Understanding, Nov. 1991, vol. 54, No. 3, pp. 358-367. | |

10 | P. Hebert, "A Self-Referenced Hand-Held Range Sensor", in proc. of the 3rd International Conference on 3D Digital Imaging and Modeling (3DIM 2001), May 28-Jun. 1, 2001, Quebec City, Canada, pp. 5-12. | |

11 | Pappa et al., "Dot-Projection Photogrammetry and Videogrammetry of Gossamer Space Structures", NASA/TM-2003-212146, Jan. 2003, Langley Research Center, Hampton, Virginia 23681-2199. | |

12 | S. Rusinkiewicz et al., "Real-Time 3D Model Acquisition", in ACM Transactions on Graphics, vol. 21, No. 3, Jul. 2002, pp. 438-446. |

Referenced by

Citing Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US8350893 * | Nov 23, 2010 | Jan 8, 2013 | Sony Corporation | Three-dimensional imaging apparatus and a method of generating a three-dimensional image of an object |

US8836766 | Nov 2, 2012 | Sep 16, 2014 | Creaform Inc. | Method and system for alignment of a pattern on a spatial coded slide image |

US9325974 | Jun 7, 2012 | Apr 26, 2016 | Creaform Inc. | Sensor positioning for 3D scanning |

US20110141243 * | Nov 23, 2010 | Jun 16, 2011 | Sony Corporation | Three-dimensional imaging apparatus and a method of generating a three-dimensional image of an object |

USD667010 * | Sep 6, 2011 | Sep 11, 2012 | Firth David G | Handheld scanner |

Classifications

U.S. Classification | 702/153, 356/603, 382/285 |

International Classification | G01C9/00, G06K9/36, G01B11/30 |

Cooperative Classification | G01B11/2513, G01B11/245, G06T7/521 |

European Classification | G01B11/245, G01B11/25D, G06T7/00R3 |

Legal Events

Date | Code | Event | Description |
---|---|---|---|

Jan 31, 2011 | AS | Assignment | Owner name: CREAFORM INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HEBERT, PATRICK;SAINT-PIERRE, ERIC;TUBIC, DRAGAN;REEL/FRAME:025720/0316 Effective date: 20101130 |

Mar 25, 2015 | FPAY | Fee payment | Year of fee payment: 4 |
