|Publication number||USRE41175 E1|
|Application number||US 11/480,248|
|Publication date||Mar 30, 2010|
|Filing date||Jun 30, 2006|
|Priority date||Jan 22, 2002|
|Also published as||US6759979, US20030137449, WO2003062849A2, WO2003062849A3|
|Inventors||Robert M. Vashisth, James U. Jensen, James W. Bunger|
|Original Assignee||Intelisum, Inc.|
Notice: More than one reissue application has been filed for the reissue of Pat. No. 6,759,979. The reissue applications are application No. 11/480,248 (the present application, filed on Jun. 30, 2006) and application No. 12/362,954 (filed on Jan. 30, 2009), all of which are reissue or divisional reissue applications of Pat. No. 6,759,979. The present application is a reissue application of U.S. Pat. No. 6,759,979 (application No. 10/348,275), which claims the benefit of U.S. Provisional Application No. 60/350,860, filed on Jan. 22, 2002, for “System and Method for Generating 3-D Topographical Visualizations,” with inventors Munish Vashisth and James U. Jensen, each application identified above being incorporated herein by this reference in its entirety.
1. Field of the Invention
The present invention relates generally to three-dimensional modeling. More specifically, the present invention relates to a system and method for capturing three-dimensional virtual models of a site that can be co-registered and visualized within a computer system.
2. Description of Related Background Art
Lidar (light detection and ranging) uses laser technology to make precise distance measurements over long or short distances. One application of lidar is the range scanner, or scanning lidar. In a typical range scanner, a lidar is mounted on a tripod equipped with a servo mechanism that continuously pans and tilts the lidar to scan a three-dimensional area. During the scanning process, the lidar makes repeated range measurements to objects in its path. The resulting range data may be collected and serve as a rough model of the scanned area.
Physical limitations of the range scanner constrain the maximum resolution of the range data, which decreases with distance from the range scanner. At large distances, the range scanner may not be able to discern surface details of an object. A lack of continuous spatial data (gaps between points) and a lack of color attributes are significant limitations of conventional range scanners. Furthermore, a range scanner only scans objects within the lidar's line-of-sight. As a result, no data is collected for the side of an object opposite to the lidar or for objects obscured by other objects (“occlusions”).
To obtain a more complete and accurate model, the range scanner can be moved to other scanning locations in order to scan the same area from different perspectives and thereby obtain range data for obscured objects. Thereafter, the resulting sets of range data can be merged into a single model.
Unfortunately, the merging of sets of range data is not automatic. Human decision-making is generally required at several steps in the merging process. For instance, a human surveyor is typically needed to determine the relative distances between the range scanning locations and the scanned area. Furthermore, a human operator must manually identify points in common (“fiducials”) between multiple sets of range data in order to align and merge the sets into a single model. Such identification is by no means easy, particularly in the case of curved surfaces. The need for human decision-making increases the cost of modeling and the likelihood of error in the process.
A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.
The system may further include a transformation module for using the GPS data with orientation information, such as bearing, for the range scanner to automatically transform the range data from a scanning coordinate system to a modeling coordinate system, where the modeling coordinate system is independent of the scanning location. A co-registration module may then combine the transformed range data with a second set of transformed range data for the same site generated at a second scanning location.
The system also includes a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner. The system may associate the digital images of the real-world objects with the corresponding range data in the virtual model.
A system for building a virtual model of a site includes a communication interface for receiving a first set of range data indicating distances from a range scanner at a first location to real-world objects. The communication interface also receives a first set of GPS data for the range scanner at the first location. The system further includes a transformation module for using the first set of GPS data with orientation information for the range scanner to automatically transform the first set of range data from a first local coordinate system to a modeling coordinate system.
A system for modeling an object includes a range scanner for scanning an object from a first vantage point to generate a first range image. The system further includes a GPS receiver for obtaining GPS readings for the first vantage point, as well as a storage medium for associating the first range image and the GPS readings within a first virtual model.
The range scanner may re-scan the object from a second vantage point to generate a second range image. Likewise, the GPS receiver may acquire updated GPS readings for the second vantage point, after which the storage medium associates the second range image and the updated GPS readings within a second virtual model. A transformation module then employs the GPS readings of the virtual models with orientation information for the range scanner at each location to automatically transform the associated range images from local coordinate systems referenced to the vantage points to a single coordinate system independent of the vantage points.
Non-exhaustive embodiments of the invention are described with reference to the figures.
Reference is now made to the figures in which like reference numerals refer to like elements. For clarity, the first digit of a reference numeral indicates the figure number in which the corresponding element is first used.
In the following description, numerous specific details of programming, software modules, user selections, network transactions, database queries, database structures, etc., are provided for a thorough understanding of the embodiments of the invention. However, those skilled in the art will recognize that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In some cases, well-known structures, materials, or operations are not shown or not described in detail to avoid obscuring aspects of the invention. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The location and dimensions of the site 104 may be defined by an operator 105 using a control device, such as a personal digital assistant (PDA) 106, computer 108, or the like, which may communicate with the range scanner 102 using any wired or wireless method. The operator 105 may specify, for instance, the degree to which the range scanner 102 pans and tilts during scanning, effectively determining the dimensions of the site 104.
In one embodiment, the range scanner 102 is equipped with a high-resolution, high-speed digital camera 110 for obtaining digital images of the site 104 during the scanning process. As explained more fully below, the digital images may be later used to apply textures to a polygon mesh created from the range data, providing a highly realistic three-dimensional visualization 112 of the site 104 for display on a computer monitor 114 or other display device.
The range scanner 102 also includes a global positioning system (GPS) receiver 116 for acquiring GPS data relative to the range scanner 102 at the location of scanning. The GPS data may include, for example, the latitude, longitude, and altitude of the range scanner 102. In other embodiments, the GPS data may include Universal Transverse Mercator (UTM) coordinates, Earth-Centered/Earth-Fixed (ECEF) coordinates, or other Earth-based locators. A GPS receiver 116 relies on signals from multiple orbiting satellites 118 (typically at least four for a three-dimensional fix) for trilateration and, in some configurations, can provide readings accurate to within a few centimeters.
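The relationship between these coordinate forms can be illustrated with a short sketch. The conversion below from latitude/longitude/altitude to ECEF is a standard geodetic formula, but the WGS84 constants and the function name are assumptions of this sketch, not part of the described system:

```python
import math

# WGS84 ellipsoid constants (assumed here; the text does not specify a datum)
WGS84_A = 6378137.0          # semi-major axis in meters
WGS84_E2 = 6.69437999014e-3  # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to Earth-Centered/Earth-Fixed meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```

Any of these Earth-based locators can serve as the geographic reference for the modeling coordinate system described later.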
In one embodiment, the range scanner 102 sends the range data, digital images, and GPS data to a computer 108, where they are used to create the visualization 112. The visualization 112 may be interactive, e.g., a user may “walk through” the site 104 depicted in the visualization 112. In addition, the user may delete or move objects depicted in the visualization 112 or modify the visualization 112 in other ways. Such visualizations 112 are highly beneficial in the fields of architecture, landscape design, land use, erosion control, etc.
The digital camera 110 may include a PowerShot G2™ camera available from Canon, Inc. In one configuration, the digital camera 110 is capable of capturing images with a resolution of 2272×1704 pixels at a rate of approximately 2.5 images per second. The digital camera 110 may be included within, attached to, or otherwise integrated with the range scanner 102. In alternative embodiments, the range scanner 102 includes multiple digital cameras 110.
The GPS receiver 116 may be embodied as a standard mapping-grade receiver, which may support L-band differential GPS (DGPS). Where higher accuracy is needed, survey-grade receivers may be used, such as a carrier phase (CPH) or real-time kinematic (RTK) GPS. In such embodiments, a base station (not shown) having a known Earth location broadcasts an error correction signal that is used by the GPS receiver 116 to achieve accuracy to within a few centimeters. An example of a suitable GPS receiver 116 is the ProMark2™ survey system available from Ashtech, Inc. of Santa Clara, Calif. Like the digital camera 110, the GPS receiver 116 may be included within, attached to, or otherwise integrated with the range scanner 102.
The range scanner 102 may also include one or more orientation indicator(s) 202 for providing information about the orientation of the range scanner 102 with respect to the Earth. For example, one indicator 202 may provide a bearing or heading (azimuth) of the range scanner 102. Azimuth is typically expressed as a horizontal angle of the observer's bearing, measured clockwise from a reference direction, such as North. A bearing indicator 202 may be embodied, for instance, as a high-accuracy compass capable of digital output.
Some GPS receivers 116 may include compasses, gyroscopes, inertial navigation systems, etc., for providing highly accurate bearing and/or other orientation information. For example, the ProMark2™ survey system described above provides an azimuth reading. Similarly, a bearing may be obtained indirectly from GPS readings, since two precise GPS coordinates define a bearing. Thus, the orientation indicator 202 need not be a separate component.
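The observation that two precise GPS coordinates define a bearing can be sketched as follows; this hypothetical helper assumes both readings have already been converted to UTM easting/northing within the same zone, so the offsets behave like planar Cartesian coordinates:

```python
import math

def bearing_from_utm(e1, n1, e2, n2):
    """Azimuth from point 1 to point 2, in degrees clockwise from grid north.

    Assumes both points lie in the same UTM zone.
    """
    # atan2(east offset, north offset) yields the clockwise-from-north angle
    azimuth = math.degrees(math.atan2(e2 - e1, n2 - n1))
    return azimuth % 360.0
```

For example, moving the receiver due east between two readings yields a bearing of 90°.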
In certain implementations, an indicator 202 may provide the tilt or inclination of the range scanner 102 with respect to the Earth's surface. For example, the range scanner 102 may be tilted with respect to one or two axes. For simplicity, however, the following exemplary embodiments assume that the range scanner 102 is level prior to scanning.
As depicted, the range scanner 102 further includes a servo 203 for continuously changing the bearing and/or tilt of the range scanner 102 to scan a selected site 104. The servo 203 may include high-accuracy theodolite-type optical or electronic encoders to facilitate high-resolution scanning.
In one embodiment, the servo 203 only tilts the range scanner 102, while a continuously rotating prism or mirror performs the panning or rotation function. Alternatively, the range scanner 102 could be mounted at a 90° angle, in which case the servo 203 is used for panning. Thus, any appropriate mechanical and/or electronic means, such as stepper motors, diode arrays, etc., may be used to control the bearing and/or tilt of the range scanner 102 within the scope of the invention.
In one embodiment, the servo 203, as well as the other components of the range scanner 102, are directed by a controller 204. The controller 204 may be embodied as a microprocessor, microcontroller, digital signal processor (DSP), or other control device known in the art.
The controller 204 is coupled to a memory 206, such as a random access memory (RAM), read-only memory (ROM), or the like. In one configuration, the memory 206 is used to buffer the range data, digital images, and GPS data during the scanning process. The memory device 206 may also be used to store parameters and program code for operation of the range scanner 102.
In addition, the controller 204 is coupled to a control interface 208, such as an infrared (IR) receiver, for receiving IR-encoded commands from the PDA 106. Various other control interfaces 208 may be used, however, such as an 802.11b interface, an RS-232 interface, a universal serial bus (USB) interface, or the like. As previously noted, the PDA 106 is used to program the range scanner 102. For example, the PDA 106 may specify the size of the site 104 to be scanned, the resolution of the range data and digital images to be collected, etc.
The controller 204 is also coupled to a communication interface 210 for sending the captured range data, digital images, and GPS data to the computer 108 for further processing. The communication interface 210 may include, for instance, an Ethernet adapter, an IEEE 1394 (FireWire) adapter, a USB adapter, or other high-speed communication interface.
The communication interface 210 of the range scanner 102 is coupled to, or in communication with, a similar communication interface 212 within the computer 108. The computer 108 may be embodied as a standard IBM-PC™ compatible computer running a widely-available operating system (OS) such as Windows XP™ or Linux™.
The computer 108 also includes a central processing unit (CPU) 214, such as an Intel™ x86 processor. The CPU 214 is coupled to a standard display interface 216 for displaying text and graphics, including the visualization 112, on the monitor 114. The CPU 214 is further coupled to an input interface 218 for receiving data from a standard input device, such as a keyboard 220 or mouse 222.
The CPU 214 is coupled to a memory 224, such as a RAM, ROM, or the like. As described in greater detail hereafter, the memory 224 includes various software modules or components, including a co-registration module 228, transformation module 229, a merging module 230, and a visualization module 232. The memory 224 may further include various data structures, such as a number of virtual models 234.
Briefly, the co-registration module 228 automatically co-registers sets of range data from different views (e.g., collected from different vantage points) using the GPS data and orientation information. Co-registration places the sets of range data 302 within the same coordinate system and combines the sets into a single virtual model 234. In addition, co-registration may require specific calibration of instruments for parallax and other idiosyncrasies.
The transformation module 229 performs the necessary transformations to convert each set of range data from a local scanning coordinate system referenced to a particular scanning location to a modeling coordinate system that is independent of the scanning location. Since transformation is typically part of co-registration, the transformation module 229 may be embodied as a component of the co-registration module 228 in one embodiment.
The merging module 230 analyzes the range data 302 to correct for errors in the scanning process, eliminating gaps, overlapping points, and other incongruities. Thereafter, the visualization module 232 produces the interactive, three-dimensional visualization 112, as explained in greater detail below.
In alternative embodiments, one or more of the described modules may be implemented using hardware or firmware, and may even reside within the range scanner 102. Thus, the invention should not be construed as requiring a separate computer 108.
In one configuration, the computer 108 includes a mass storage device 236, such as a hard disk drive, optical storage device (e.g., DVD-RW), or the like, which may be used to store any of the above-described modules or data structures. Hence, references herein to “memory” or “storage media” should be construed to include any combination of volatile, non-volatile, magnetic, or optical storage media.
The pattern of marks depicted within the range data 302 represents sample points, i.e., points at which a range measurement has been taken. The density or resolution of the range data 302 depends on the distance of the object from the range scanner 102, as well as the precision and accuracy of the lidar 103 and the mechanism for panning and/or tilting the lidar 103 relative to its platform.
As previously noted, the GPS receiver 116 associated with the range scanner 102 obtains GPS data 304 (e.g., latitude, longitude, altitude) relative to the range scanner 102 at the scanning position. Additionally, the orientation indicator(s) 202 may provide orientation information 305, e.g., bearing, tilt.
The camera 110 associated with the range scanner 102 obtains one or more high-resolution digital images 306 of the site 104. The resolution of the digital images 306 will typically far exceed the resolution of the range data 302.
The range data 302, GPS data 304, orientation information 305, and digital images 306 are collected at each scanning position or location and represent a virtual model 234 of the site 104. Separate virtual models 234 are generated from the perspective of each of the scanning positions. Of course, any number of virtual models 234 of the site 104 can be made within the scope of the invention.
In certain instances, a data structure lacking one or more of the above-described elements may still be referred to as a “virtual model.” For example, a virtual model 234 may not include the digital images 306 or certain orientation information 305 (such as tilt data where the range scanner 102 is level during scanning).
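The contents of a virtual model 234 might be grouped as in the sketch below. The text names the elements (range data, GPS data, orientation information, digital images) but not a concrete layout, so the field names and types here are purely illustrative, with the optional elements allowed to be absent:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class VirtualModel:
    """Hypothetical container for one scan's worth of data (element 234)."""
    range_data: List[Tuple[float, float, float]]       # sample points, scanner-local
    gps_data: Tuple[float, float, float]               # e.g., (latitude, longitude, altitude)
    bearing_deg: Optional[float] = None                # orientation info 305; may be absent
    tilt_deg: Optional[float] = None                   # omitted when the scanner is level
    images: List[bytes] = field(default_factory=list)  # digital images 306; optional
```

A model missing the optional fields still qualifies as a "virtual model" in the sense used above.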
In general, each of the sets of range data 302a-c has a separate scanning coordinate system 402a-c referenced to its scanning position. Typically, the range data 302 is initially captured in a polar (or polar-like) coordinate system.
Converting polar range data 302 into the depicted Cartesian coordinates may be done using standard transformations, as shown below.
X = R cos φ cos θ Eq. 1
Y = R sin φ Eq. 2
Z = R cos φ sin θ Eq. 3
In certain embodiments, the geometry of the range scanner 102 (e.g., the axis of rotation, offset, etc.) may result in a polar-like coordinate system that requires different transformations, as will be known to those of skill in the art. In general, the origin of each of the scanning coordinate systems 402a-c is the light-reception point of the lidar 103.
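Equations 1-3 can be sketched directly in code. In this illustrative helper, θ is taken as the pan angle and φ as the tilt (elevation) angle; the function name and degree-based interface are assumptions of the sketch:

```python
import math

def polar_to_cartesian(r, pan_deg, tilt_deg):
    """Apply Equations 1-3: convert a range sample (R, θ, φ) to scanner-local X, Y, Z.

    The origin is the light-reception point of the lidar.
    """
    theta = math.radians(pan_deg)
    phi = math.radians(tilt_deg)
    x = r * math.cos(phi) * math.cos(theta)  # Eq. 1
    y = r * math.sin(phi)                    # Eq. 2
    z = r * math.cos(phi) * math.sin(theta)  # Eq. 3
    return x, y, z
```

A scanner with an offset axis of rotation would need the different transformations mentioned above; this sketch covers only the ideal case.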
In one embodiment, the modeling coordinate system 602 is based on a geographic coordinate system, such as Universal Transverse Mercator (UTM), Earth-Centered/Earth-Fixed (ECEF), or longitude/latitude/altitude (LLA). GPS receivers 116 are typically able to display Earth-location information in one or more of these coordinate systems. UTM is used in the following examples because it provides convenient Cartesian coordinates in meters. In the following examples, the UTM zone is not shown since the range data 302 will typically be located within a single zone.
The transformation module 229 may begin by rotating a set of range data 302 about the Y axis by the bearing angle, b, of the range scanner 102:
X1 = X cos(b) − Z sin(b) Eq. 4
Z1 = Z cos(b) + X sin(b) Eq. 5
These equations assume that the range scanner 102 was level at the time of scanning, such that the XZ planes of the scanning coordinate system 402 and modeling coordinate system 602 are essentially coplanar. If, however, the range scanner 102 was tilted with respect to the X and/or Z axes, the transformations could be modified by one of skill in the art.
Next, the rotated range data is translated by the GPS-derived UTM coordinates of the scanning location, i.e., the easting (GPSE), northing (GPSN), and height (GPSH) of the range scanner 102:
X2 = X1 + GPSE Eq. 6
Y2 = Y1 + GPSH Eq. 7
Z2 = Z1 + GPSN Eq. 8
Those of skill in the art will recognize that the invention is not limited to UTM coordinates and that transformations exist for other coordinate systems, such as ECEF and LLA. In certain embodiments, the modeling coordinate system 602 may actually be referenced to a local landmark or a point closer to the range data 302, but will still be geographically oriented.
In the preceding example, the units of the range data 302 and GPS data 304 are both in meters. For embodiments in which the units differ, a scaling transformation will be needed.
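Equations 4-8 together take a point from the scanning coordinate system 402 to the modeling coordinate system 602. The helper below is an illustrative sketch under the assumptions just stated (level scanner, meters throughout); its name and interface are not from the patent:

```python
import math

def scanning_to_modeling(points, bearing_deg, gps_easting, gps_northing, gps_height):
    """Apply Equations 4-8: rotate scanner-local points by the bearing b,
    then translate by the scanner's UTM position."""
    b = math.radians(bearing_deg)
    out = []
    for x, y, z in points:
        # Eq. 4-5: rotate about the vertical (Y) axis by the bearing
        x1 = x * math.cos(b) - z * math.sin(b)
        z1 = z * math.cos(b) + x * math.sin(b)
        # Eq. 6-8: translate by the GPS-derived UTM coordinates
        out.append((x1 + gps_easting, y + gps_height, z1 + gps_northing))
    return out
```

Because the resulting coordinates no longer depend on where the scanner stood, sets transformed this way from different vantage points land in the same frame.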
When the transformation is complete, the co-registration module 228 co-registers, or combines, the range data 302a-c from the various views into a co-registered model 702 of the entire site 104. This may involve, for example, combining the sets of range data 302a-c into a single data structure, while still preserving the ability to access the individual sets.
In one embodiment, the co-registered model 702 includes GPS data 304 for at least one point. This allows the origin of the modeling coordinate system 602 to be changed to any convenient location, while still preserving a geographic reference.
In one embodiment, the merging module 230 incorporates the Scanalyze™ product available from Stanford University. Scanalyze™ is an interactive computer graphics application for viewing, editing, aligning, and merging range images to produce dense polygon meshes.
Scanalyze™ processes three kinds of files: triangle-mesh PLY files (extension .ply), range-grid PLY files (also with extension .ply), and SD files (extension .sd). Triangle-mesh PLY files encode general triangle meshes as lists of arbitrarily connected 3D vertices, whereas range-grid PLY files and SD files encode range images as rectangular arrays of points. SD files also contain metadata that describe the geometry of the range scanner 102 used to acquire the data. This geometry is used by Scanalyze™ to derive line-of-sight information for various algorithms. PLY files may also encode range images (in polygon mesh form), but they do not include metadata about the range scanner and thus do not provide line-of-sight information.
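For reference, a triangle-mesh PLY file of the kind described above is simple to produce. The writer below is a minimal sketch of the ASCII form (the function name is hypothetical); it omits the range-scanner metadata that distinguishes SD files, so no line-of-sight information is encoded:

```python
def write_triangle_mesh_ply(path, vertices, faces):
    """Write a minimal ASCII triangle-mesh PLY file.

    `vertices` is a list of (x, y, z) floats; `faces` is a list of
    3-tuples of vertex indices.
    """
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for x, y, z in vertices:
            f.write(f"{x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"3 {a} {b} {c}\n")
```

A range-grid PLY file would instead store points in a rectangular array, preserving the scan's row/column organization.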
Once the PLY or SD files have been loaded, they can be pairwise aligned using a variety of techniques—some manual (i.e. pointing and clicking) and some automatic (using a variant of the ICP algorithm).
Pairs of scans can be selected for alignment either automatically (so-called all-pairs alignment) or manually, by choosing two scans from a list. These pairwise alignments can optionally be followed by a global registration step whose purpose is to spread the alignment error evenly across the available pairs. The new positions and orientations of each PLY or SD file can be stored as a transform file (extension .xf) containing a 4×4 matrix.
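A bare-bones version of the point-to-point ICP variant mentioned above can be sketched with NumPy. This is a didactic sketch, not Scanalyze™'s implementation: it uses brute-force nearest-neighbour matching and the standard SVD (Kabsch) solution for the rigid motion:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t

def icp(src, dst, iterations=20):
    """Point-to-point ICP: repeatedly match each source point to its nearest
    destination point, then solve for the rigid motion and apply it."""
    cur = src.copy()
    r_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matches = dst[d2.argmin(axis=1)]
        r, t = best_rigid_transform(cur, matches)
        cur = cur @ r.T + t
        r_total, t_total = r @ r_total, r @ t_total + t   # compose transforms
    return r_total, t_total
```

Given a reasonable initial alignment (here supplied by the GPS-based transformation), such a loop refines the pairwise registration; production implementations add k-d tree searches and outlier rejection.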
The visualization module 232 also decomposes the digital images 306 into textures 904, which are then applied to the polygon mesh 902. In essence, the digital images 306 are “draped” upon the polygon mesh 902. Due to the relatively higher resolution of the digital images 306, the textures 904 add a high degree of realism to the visualization 112. Techniques and code for applying textures 904 to polygon meshes 902 are known to those of skill in the art.
In one embodiment, the mesh 902 and textures 904 are used to create the visualization 112 of the site 104 using a standard modeling representation, such as the virtual reality modeling language (VRML). Thereafter, the visualization 112 can be viewed using a standard VRML browser, or a browser equipped with a VRML plugin, such as the Microsoft™ VRML Viewer. Of course, the visualization 112 could also be created using a proprietary representation and viewed using a proprietary viewer.
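The VRML representation mentioned above encodes geometry as an IndexedFaceSet. The sketch below emits a minimal, textureless VRML 2.0 scene from a mesh; the function name is illustrative, and a real visualization 112 would also attach the textures 904 via texture coordinates:

```python
def mesh_to_vrml(vertices, faces):
    """Emit a minimal VRML 2.0 IndexedFaceSet for a polygon mesh."""
    points = ", ".join(f"{x} {y} {z}" for x, y, z in vertices)
    # VRML face indices are flattened lists with each face terminated by -1
    index = ", ".join(", ".join(str(i) for i in f) + ", -1" for f in faces)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        f"    coord Coordinate {{ point [ {points} ] }}\n"
        f"    coordIndex [ {index} ]\n"
        "  }\n"
        "}\n"
    )
```

The resulting text can be opened directly in any standard VRML browser or plugin.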
After the range scanner 102 is moved to a second location, the method 1000 continues by scanning 1008 the site 104 to generate a second set of range data 302 indicating distances from the range scanner 102 at the second location to real-world objects in the site 104. In addition, the GPS receiver 116 acquires 1010 a second set of GPS data 304 relative to the range scanner 102 at the second location, after which the range scanner 102 outputs 1012 a second virtual model 234 comprising the second sets of range data 302 and GPS data 304.
In one configuration, a transformation module 229 then uses 1014 the sets of GPS data 304 to transform the sets of range data 302 from scanning coordinate systems 402 to a single modeling coordinate system 602. Thereafter, the transformed range data 302 can be merged and visualized using standard applications.
The site models 1104a-b may be co-registered models 702 or merged models 802, as previously shown and described. Furthermore, as previously noted, a site model 1104a-b may include GPS data 304.
In one embodiment, the transformation module 229 uses the sets of GPS data 304a-b to combine the individual site models 1104a-b into a single area model 1106. This may be done in the same manner that the sets of range data 302a-c were combined, as described above.
The resulting area model 1106 may then be used to produce an interactive, three-dimensional visualization 112 of the entire area 1102 that may be used for many purposes. For example, a user may navigate from one site 104 to another within the area 1102. Also, when needed, a user may remove any of the site models 1104 from the area model 1106 to visualize the area 1102 without the objects from the removed site model 1104. This may be helpful in the context of architectural or land-use planning.
While specific embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise configuration and components disclosed herein. Various modifications, changes, and variations apparent to those skilled in the art may be made in the arrangement, operation, and details of the methods and systems of the present invention disclosed herein without departing from the spirit and scope of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5337149||Nov 12, 1992||Aug 9, 1994||Kozah Ghassan F||Computerized three dimensional data acquisition apparatus and method|
|US5988862||Apr 24, 1996||Nov 23, 1999||Cyra Technologies, Inc.||Integrated system for quickly and accurately imaging and modeling three dimensional objects|
|US6166744||Sep 15, 1998||Dec 26, 2000||Pathfinder Systems, Inc.||System for combining virtual images with real-world scenes|
|US6246468||Oct 23, 1998||Jun 12, 2001||Cyra Technologies||Integrated system for quickly and accurately imaging and modeling three-dimensional objects|
|US6249600||Nov 26, 1997||Jun 19, 2001||The Trustees Of Columbia University In The City Of New York||System and method for generation of a three-dimensional solid model|
|US6292215||Jul 24, 1997||Sep 18, 2001||Transcenic L.L.C.||Apparatus for referencing and sorting images in a three-dimensional system|
|US6307556||Mar 27, 1995||Oct 23, 2001||Geovector Corp.||Augmented reality vision systems which derive image information from other vision system|
|US6330523||Oct 23, 1998||Dec 11, 2001||Cyra Technologies, Inc.||Integrated system for quickly and accurately imaging and modeling three-dimensional objects|
|US6420698||Oct 23, 1998||Jul 16, 2002||Cyra Technologies, Inc.||Integrated system for quickly and accurately imaging and modeling three-dimensional objects|
|US6473079||Oct 23, 1998||Oct 29, 2002||Cyra Technologies, Inc.||Integrated system for quickly and accurately imaging and modeling three-dimensional objects|
|US6526352||Jul 19, 2001||Feb 25, 2003||Intelligent Technologies International, Inc.||Method and arrangement for mapping a road|
|US6664529 *||Jan 15, 2002||Dec 16, 2003||Utah State University||3D multispectral lidar|
|US6759979||Jan 21, 2003||Jul 6, 2004||E-Businesscontrols Corp.||GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site|
|US20010010546||Sep 26, 1997||Aug 2, 2001||Shenchang Eric Chen||Virtual reality camera|
|US20020060784||Jan 15, 2002||May 23, 2002||Utah State University||3D multispectral lidar|
|US20030090415||Oct 24, 2002||May 15, 2003||Mitsui & Co., Ltd.||GPS positioning system|
|US20040105573||Sep 30, 2003||Jun 3, 2004||Ulrich Neumann||Augmented virtual environments|
|US20050057745||Sep 14, 2004||Mar 17, 2005||Bontje Douglas A.||Measurement methods and apparatus|
|WO1997040342A2||Apr 24, 1997||Oct 30, 1997||Cyra Technologies, Inc.||Integrated system for imaging and modeling three-dimensional objects|
|WO2001004576A1||Jul 14, 2000||Jan 18, 2001||Cyra Technologies, Inc.||Method for operating a laser scanner|
|WO2001088565A2||Apr 27, 2001||Nov 22, 2001||Cyra Technologies, Inc.||Apparatus and method for identifying the points that lie on a surface of interest|
|WO2001088566A2||Apr 27, 2001||Nov 22, 2001||Cyra Technologies, Inc.||System and method for acquiring tie-point location information on a structure|
|WO2001088741A2||Apr 27, 2001||Nov 22, 2001||Cyra Technologies, Inc.||System and method for concurrently modeling any element of a model|
|WO2001088849A2||Apr 27, 2001||Nov 22, 2001||Cyra Technologies, Inc.||Apparatus and method for forming 2d views of a structure from 3d point data|
|WO2002016865A2||Aug 24, 2001||Feb 28, 2002||3Shape Aps||Object and method for calibration of a three-dimensional light scanner|
|1||"'Automatic' Multimodal Medical Image Fusion," http://csdl2.computer.org/persagen/DLAbsToc.jsp, Jul. 7, 2006, pp. 1.|
|2||"2-D and 3-D Image Registration: A Tutorial," http://www.cs.wright.edu/~agoshtas/CVPR04_Registration_Tutorial.html, Jul. 7, 2006, pp. 1-3.|
|4||"3D Modeling of Outdoor Environments by Integrating Omnidirectional Range and Color Images," Toshihiro ASAI et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|5||"3D Reality Modelling: Photo-Realistic 3D Models of Real World Scenes," Vitor Sequeira, Joäo G.M. Gonçalves, Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission, 2002, pp. 1-8.|
|6||"A Contour-Based Approach to Multisensor Image Registration," Hui Li, B. S. Manjunath, Sanjit K. Mitra, IEEE Transactions On Image Processing, vol. 4, No. 3, Mar. 1995, pp. 320-334.|
|7||"A flexible mathematical model for matching of 3D surfaces and attributes," Devrim Akca, Armin Gruen, Electronic Imaging, SPIE vol. 5665, 2005, pp. 184-195.|
|8||"A Multi-Resolution ICP with Heuristic Closest Point Search for Fast and Robust 3D Registration of Range Images," Timothée Jost and Heinz Hügli, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-7.|
|9||"A Point-and-Shoot Color 3D Camera," Askold V. Strat, Manuel M. Oliveira, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|10||"A simple MATLAB interface to FireWire cameras," F. Wörnle, May 2006, pp. 1-25.|
|11||"A Survey of Medical Image Registration," J. B. Antoine Maintz and Max A. Viergever, Image Sciences Institute, Utrecht University Hospital, Utrecht, the Netherlands, Oct. 16, 1997, pp. 1-37.|
|12||"Adaptive Enhancement of 3D Scenes using Hierarchical Registration of Texture-Mapped 3D models," Srikumar Ramalingam and Suresh K. Lodha, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|13||"Advanced Nonrigid Registration Algorithms for Image Fusion," Simon K. Warfield et al., Brain Mapping: The Methods, Second Edition, 2002, pp. 661-690.|
|14||"Alignment by Maximization of Mutual Information," Paul A. Viola, Massachusetts Institute of Technology, 1995, pp. 1-156.|
|15||"An Integrated Multi-Sensory System for Photo-Realistic 3D Scene Reconstruction," Kia Ng et al., School of Computer Studies, University of Leeds, European Commission-Joint Research Centre, pp. 1-9.|
|16||"An Integrated Multi-Sensory System for Photo-Realistic 3D Scene Reconstruction," Kia Ng et al., School of Computer Studies, University of Leeds, European Commission—Joint Research Centre, pp. 1-9.|
|17||"Automated reconstruction of 3D models from real environments," V. Sequeira, K. Ng, E. Wolfart, J.G.M. Gonçalves, D. Hogg, ISPRS Journal of Photogrammetry & Remote Sensing 54, 1999, pp. 1-22.|
|18||"Automated Registration and Evaluation of Laser Scanner Point Clouds and Images, Automatic Point Cloud Registration Using Template Shaped Targets," http://www.photogrammetry.ethz.ch/research/pointcloud/withtargets.html, Aug. 25, 2004.|
|19||"Automated Registration, 3DD Optix 400 Series,"3D Digital Corp., Apr. 2004.|
|20||"Automatic Registration of Range Images Based on Correspondence of Complete Plane Patches," Wenfeng He, Wei Ma, Hongbin Zha, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-6.|
|21||"Combining texture and shape for automatic crude patch registration," Joris Vanden Wyngaerd et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|22||"Consistent Linear-Elastic Transformations for Image Matching," Gary E. Christensen, A. Kuba et al. (Eds.): IPIM'99, LNCS 1613, 1999, pp. 224-237.|
|23||"Construction of Large-Scale Virtual Environment by Fusing Range Data, Texture Images, and Airborne Altimetry Data," Conny Riani Gunadi et al., Proceedings of the First International Symposium on 3-D Data Processing Visualization and Transmission, 2002 pp. 1.|
|24||"Construction of Large-Scale Virtual Environment by Fusing Range Data, Texture Images, and Airborne Altimetry Data," Conny Riani Gunadi et al., Proceedings of the First International Symposium on 3-D Data Processing Visualization and Transmission, 2002, pp. 1-4.|
|25||"Correction of color information of a 3D model using a range intensity image," Kazunori Umeda et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|26||"Edge and Line Detection Image Fusion Systems Research," http://www.imgfsr.com/ifsr_is_ed.html, Jul. 7, 2006, pp. 1-8.|
|27||"Effective Nearest Neighbor Search for Aligning and Merging Range Images," Ryusuke Sagawa et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling. 2003, pp. 1-8.|
|28||"Enhanced, Robust Genetic Algorithms for Multiview Range Image Registration," Luciano Silva et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|29||"Evaluating Collinearity Constraint for Automatic Range Image Registration," Yonghuai Liu et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|30||"Fast Alignment of 3D Geometrical Models and 2D Color Images using 2D Distance Maps," Yumi Iwashita et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|31||"Fast Normalized Cross-Correlation," http://www.idiom.com/~zilla/Work/nvisionInterface/nip.htm, Jul. 7, 2006, pp. 1-11.|
|32||"Fast Normalized Cross-Correlation," http://www.idiom.com/˜zilla/Work/nvisionInterface/nip.htm, Jul. 7, 2006, pp. 1-11.|
|33||"Feature point detection in multiframe images," Barbara Zitova et al., Czech Pattern Recognition Workshop 2000, Feb. 2, 2000-Feb. 4, 2000, pp. 1-6.|
|34||"Fully automatic registration of multiple 3D data sets," Daniel F. Huber, Martial Hebert, Image and Vision Computing 21, 2003, pp. 637-650.|
|35||"Fusion and interpretation of medical images," http://www.research.ibm.com/hc/VISUALIZE/visualize.html, Jul. 7, 2006, pp. 1-3.|
|36||"Future standard," http://www.medicalimagingmag.com/issues/articles/2000-08_01.asp, Jul. 7, 2006, pp. 1.|
|37||"Gaussian Random Fields on Sub-Manifolds for Characterizing Brain Surfaces," http://portal.acm.org/citation.cfm?id=645595.660540&coll=GUIDE&d..., Jul. 6, 2006, pp. 1-3.|
|38||"HLODs: Hierarchical Levels of Detail, hierarchical Simplification for Faster Display of Massive Geometric environments," Department of Computer Science, University of North Carolina at Chapel Hill, Feb. 2004.|
|39||"Image Fusion Systems Research," http://www.imgfsr.com/, Jul. 7, 2006, pp. 1-2.|
|40||"Image Fusion-a new era in diagnosis," Dr Michael Kitchener, Imaging Update, Issue 8, Apr. 2002, pp. 1-4.|
|41||"Image Fusion—a new era in diagnosis," Dr Michael Kitchener, Imaging Update, Issue 8, Apr. 2002, pp. 1-4.|
|42||"Image Guidance Laboratories," Complete Laboratory Publications, http://www-igl.stanford.edu/papers.php?pg=main..., Jul. 7, 2006.|
|43||"Image Image-Guided Interventions Workshop Guided Interventions Workshop," John Haller, National Institute of Biomedical Imaging and Bioengineering, May 13, 2004-May 14, 2004, pp. 2-11.|
|44||"Image Registration and Mosaicking," http://prettyview.com/mtch/mtch.shtml, Jul. 7, 2006, pp. 1.|
|45||"Image registration methods: a survey," Barbara Zitova et al., Department of Image Processing, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Jun. 26, 2003, pp. 977-1000.|
|46||"Image-Based Object Editing," Holly Rushmeier et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|47||"Image-Based Registration of 3D-Range Data Using Feature Surface Elements," Gerhard Heinrich Bendels et al., Institute for Computer Science II-Computer Graphics, University of Bonn, Germany, 2004, pp. 1-10.|
|48||"Image-Based Registration of 3D-Range Data Using Feature Surface Elements," Gerhard Heinrich Bendels et al., Institute for Computer Science II—Computer Graphics, University of Bonn, Germany, 2004, pp. 1-10.|
|49||"Infrastructure for Image Guided Surgery," John W. Haller, Timothy C. Ryken, Thomas A. Gallagher and Michael W. Vannier, National Institute of Neurological Disease and Stroke, Jun. 28, 2001, pp. 1-6.|
|50||"Inverse Problems: Image Restoration and Parameter Identification," http://www.mit.jyu.fi/majkir/tutkimus/index.html, Tommi Kärkkäinen and Kirsi Majava, University of Jyväskylä Dept. of Mathematical Information Technology, Aug. 2, 2004, pp. 1-3.|
|51||"Investigations of Image Fusion," http://www.ece.lehigh.edu/SPCRL/IF/image_fusion.htm, Jul. 7, 2006, pp. 1-13.|
|52||"iPhotoMeasure-The Contractors Photo Measuring Tool," http://www.iphotomeasure.com/faq.asp, Jan. 25, 2007.|
|53||"iPhotoMeasure—The Contractors Photo Measuring Tool," http://www.iphotomeasure.com/faq.asp, Jan. 25, 2007.|
|54||"Keith Price Bibliography Fusion of Medical Data," http://iris.usc.edu/Vision-Notes/bibliography/match-p1504.html, Jul. 7, 2006.|
|55||"Least Squares 3D Surfaces Matching," Armin Gruen, Devrim Akca, Geospatial Goes Global: From Your Neighborhood to the Whole Planet, ASPRS 2005 Annual Conference, Mar. 7, 2005-Mar. 11, 2005, pp. 1-13.|
|56||"Lecture Notes In Computer Science," http://portal.acm.org/toc.cfm?id=645595&type=proceeding&coll=GUI..., Jul. 6, 2006, pp. 1-6.|
|57||"Leica Cyclone 5.6 Register," Leica Geosystems, Heerbrugg, Switzerland, 2006.|
|58||"Medical image fusion by wavelet transform modulus maxima," Guihong Qu, Dali Zhang and Pingfan Yan, Department of Automation, Tsinghua University, Beijing 100084,China, May 10, 2001, pp. 184-190.|
|59||"Medical Image Fusion," http://www.uihealthcare.com/news/currents/vol1issue1/figimfus.html, Jul. 6, 2006, pp. 1-4.|
|60||"Medical Image Processing," http://www.sce.carleton.ca/faculty/adler/elg7173/elg7173.html, Jul. 7, 2006, pp. 1-5.|
|61||"Multi-modality image registration using mutual information based on gradient vector flow," Yujun Guo, May 1, 2006, pp. 1-31.|
|62||"Multi-Sensor Image Fusion Using the Wavelet Transform," Hui Li, B.S. Manjunath, Sanjit K. Mitra, IEEE, 1994, pp. 51-55.|
|63||"Non-parametric 3D Surface Completion," Toby P. Breckon, Robert B. Fisher, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|64||"Open Scene Graph Home Page," http://www.openscenegraph.org/osgwiki/pmwiki.php/Home/HomePage, Jan. 25, 2007.|
|65||"Project: Fully Automated Registration and Composite Generation of Multisensor and Multidate Satellite Image Data," http://vision.ece.ucsb.edu/registration/satellite/, Jul. 7, 2006, pp. 1-2.|
|66||"Projective Surface Matching of Colored 3D Scans," Kari Pulli et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|67||"Registration and Fusion of Intensity and Range Data for 3D Modelling of Real World Scenes," Paulo Dias et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.|
|68||"Registration and Integration of Textured 3-D Data," Andrew Johnson and Sing Bing Kang, Digital Equipment Corporation Cambridge Research Lab, Sep. 1996, pp. 1-48.|
|69||"Reliability of Functional MRI for Motor and Language Cortex Activation," http://dolphin.radiology.uiowa,edu/ge/BME/public_html/projects/project, Jul. 6, 2006, pp. 1.|
|70||"Robust Detection of Significant Points in Multiframe Images," http://staff.utia.cas.cz/zitova/corners.htm, Jul. 7, 2006, pp. 1-5.|
|71||"Selected Papers on Image Registration, Image Fusion Systems Research," http://www.imgfsr.com/ifsr_irb.html, Jul. 7, 2006.|
|72||"Semi-automatic range to range registration: a feature-based method," Chen Chao et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.|
|73||"The Registration of Three Dimensional Images from one or more Imaging Modalities," http://www.biomed.abdn.ac.uk/Abstracts/A00331/, Jul. 7, 2006, pp. 1-11.|
|74||"Workshop on symmetries, inverse problems and image processing," http://www.indmath.uni-linz.ac.at/people/bila/Workshop.html, Jan. 12, 2005, pp. 1-4.|
|75||*||A Volumetric Method for Building Complex Models from Range Images by Brian Curless et al.; Proc. SIGGRAPH '96, Aug. 1996; pp. 1-10.|
|76||*||Allen, P. et al, Avenue: automated site modeling in urban environments-3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on, 2001, pp. 357-364.|
|78||Ameesh Makadia, et al., "Fully Automatic Registration of 3D Point Clouds," Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006.|
|79||Andrew Johnson, et al. "Registration and Integration of Textured 3-D Data," Bibliography, 1996.|
|80||Armin Gruen, et al., "Least Squares 3D Surface and Curve Matching," Elsevier, ISPRS Journal of Photogrammetry & Remote Sensing 59, Feb. 16, 2005.|
|81||Ayman F. Habib, et al., "Automatic Surface Matching for the Registration of LIDAR Data and MR Imagery," ETRI Journal, vol. 28, No. 2, Apr. 2006.|
|82||B. Vangelder, et al., "Modern Technologies for Design Data Collection. " Civil Engineering, Joint Transportation Research Program, Purdue Libraries, 2005.|
|83||*||Beraldin, J-A. et al, "Portable Digital 3-d Imaging System For Remote Sites"-Circuits and Systems, 1998. ISCAS '98. Proceedings of the 1998 IEEE International Symposium on, pp. V488-V493.|
|85||*||Beraldin, J-A. et al, "Object Model Creation From Multiple Range Images: Acquisition, Calibration, Model Building And Verification," 3-D Digital Imaging and Modeling, 1997. Proceedings., International Conference on Recent Advances, pp. 326-333.|
|86||*||Besl, Paul, "Geometric modeling and computer vision"-Proceedings of the IEEE, vol. 76, No. 8, Aug. 1988, pp. 936-958.|
|88||Christoph Dold, "Extended Gaussian Images for the Registration of Terrestrial Scan Data," Institute of Cartography and Geoinformatics, Workshop "Laser Scanning 2005," Sep. 1214, 2005.|
|89||Christoph Dold, et al., "Automatic Matching of Terrestrial Scan Data as a Basis for the Generation of Detailed 3D City Models," International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, vol. 35, Part 3, p. 1091-1096, 2004.|
|90||D. Akca, "Registration of Point Clouds Using Range and Intensity Information," International Workshop on Recording, Modeling and Visualization of Cultural Heritage, 2005.|
|91||David T. Gering, et al., An Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and Interventional Imagining, MIT AI Laboratory, 1999.|
|92||Dinesh Manandhar, "Extraction of linear features from vehicle-borne laser data," Remote Sensing, Singapore, vol. 2, p. 113-1118, Nov. 5-9, 2001.|
|93||Faysal Boughorbal, et al., "Registration and Integration of Multi-Sensor Data for Photo-realistic Scene Reconstruction," Applied Imagery Pattern Recognition, Sponsored by SPIE, Washington, DC, Oct. 13-15, 1999.|
|94||Fiorella Sgallari, "Numerical Solution of Inverse Problems in Image Processing," University of Bologna, Presented at Association for Computing Machinery Conference, Jan. 13, 2006.|
|95||Geoff Jacobs, "Field Productivity Factors in Laser Scanning, Part 1," Professional Surveyor Magazine, Jan. 2007.|
|96||Google Search, "Medical Image Fusion," http://www.google.com/search?hl=cn&q=medical+image+fusion&btnG..., Jul. 7, 2006.|
|97||*||Gueorguiev, A. et al, "Design, architecture and control of a mobile site-modeling robot"-Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on, pp. 3266-3271.|
|99||Hans J. Johnson, et al., "Consistent Landmark and Intensity-based Image Registration," IEEE Transactions on Medical Imaging, p. 126, 2002.|
|100||Helmut Pottmann, et al., "Registration without ICP," Geometric Modeling and Industrial Geometry Group, Vienna University of Technology, Mar. 5, 2004.|
|101||Hendrik P. A. Lensch, et al., "Automated Texture Registration and Stitching for Real World Models," Max-Planck-Institute for Computer Science, Baarbrucken, Germany, 2000.|
|102||Isi-Gi, Dec. 21, 2006.|
|103||*||Kamgar-Parsi et al, "Registration Algorithms For Geophysical Maps"-Oceans '97, MTS/IEEE Conference Proceedings, pp. 974-980.|
|105||Kia Ng, et al., "An Integrated Multi-Sensory System for photo-Realistic 3D Scene Reconstruction," ISPRS Comm, p. 356-363, 1998.|
|106||*||Klein, Konrad et al, "View Planning for the 3D Modelling of Real World Scenes," Proc. of the 2000 IEEE/RSJ International Co on Intelligent Robots and Systems, pp. 943-948.|
|107||*||Kropp,A. et al, OMNIVIS'00: "Acquiring and Rendering High-Resolution Spherical Mosaics", pp. 1-7.|
|108||Kwang-Ho Bae, et al., "Automated registration of Unorganised Point Clouds from Terrestrial Laser Scanners," Department of Spatial Sciences, Curtin University of Technology, Perth, Australia, 2004.|
|109||*||Li, Rongxing, "Mobile Mapping-An Emerging Technology For Spatial Data Acquisition," Dept. of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 2000, pp. 1-23.|
|110||*||Li, Rongxing, "Mobile Mapping—An Emerging Technology For Spatial Data Acquisition," Dept. of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 2000, pp. 1-23.|
|111||Matthew P. Tait, "Point Cloud registration: Current State of the Science," Schulich School of Engineering, University of Calgary, Mar. 27, 2006.|
|112||Mehdi Bouroumand, et al., "The Fusion of Laser Scanning and Close Range Photogrammetry in Bam: Laser-Photogrammetric Mapping of Bam Citadel (Arg-E-Bam)/Iran," KNT University in Tehran, 2004.|
|113||*||Modeling and Rendering of Real Environments by Wagner T. Correa et al.; RITA, vol. IX, Numero 1, Aug. 2002; pp. 1-32.|
|114||Naser El-Sheimy, et al., "Digital Terrain Modeling, Acquisition, Manipulation, and Applications," Artech House, Inc., p. 117-119, 184-186, 2005.|
|115||Natasha Gelfand, et al., "Robust Global Registration," Eurographics Symposium on Geometry Processing, 2005.|
|116||Nathaniel Williams, et al., "Automatic Image Alignment for 3D Environment Modeling," Bibliography, Presented at the Siggraph conference, Aug. 11, 2004.|
|117||Nathaniel Williams, et al., "Automatic Image Alignment for 3D Environment Modeling," Department of Computer Science, University of North Carolina at Chapel Hill, 2004.|
|118||Nhat Xuan Nguyen, "Numerical Algorithms for Image Superresolution," http://citeseer.ist.psu.edu/nguyen00numerical.html, 2000.|
|119||Paul Rademacher, "Ray Tracing: Graphics for the Masses," ACM, New York, vol. 3, Issue 4, p. 3-7, 1997.|
|120||Pedro F. Felzenswalb, et al., "Pictorial Structures for Object Recognition," Artificial Intelligence Lab, Massachusetts Institute of Technology, Computer Science Department, Cornell University, 2003.|
|121||Peter K. Allen, et al., "3D Modeling of Historic Sites Using Range and Image Data," Dept. of Computer Science, Columbia University, Dept. of Computer Science, Hunter College, May 16, 2008.|
|122||Richard A. Robb, et al., "Adaptive Piece-wise Registration for Automated Fusion of Point Cloud Coordinates and Anatomic Volume Images," http://www.mayoclinictechnology.com/tc/software/MMV-04-247-565.html, 2004.|
|123||S.T. Dijkman, et al., "Semi Automatic Registration of Laser Scanner Data," International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, Natural Resources, Canada, vol. 34, Part 5, pp. 12-17, 2002.|
|124||*||Sato, Yukio et al, "Three-Dimensional Shape Reconstruction by Active Rangefinder," Proc. CVPR., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Jun. 1993, pp. 142-147.|
|125||*||Scanalyze: a system for aligning and merging range data; http://graphics.standford.edu/software/scanalyze; dated Dec. 9, 2002; pp. 1-7.|
|126||*||Soucy, M. et al, "A general surface approach to the integration of a set of range views"-Pattern Analysis and Machine Intelligence, vol. 17, No. 4, IEEE Transactions on, Apr. 1995, pp. 344-358.|
|128||Steven Alexander Sablerolle, "Automatic Registration of Laser Scanning Data and Colour Images," TU Delft, Faculty of Civil Engineering, Final Presentation Msc. Geomatics, Oct. 31, 2006.|
|130||Steven Sablerolle, "Graduation Research Report," TU Delft, Sep. 2006.|
|131||T. Rabbani, et al., "Segmentation of Point Clouds Using Smoothness Constraint," ISPRS Commission V Symposium 'Image Engineering and Vision Metrology,' 2006.|
|133||Tahir Rabbani Shah, "Automatic Reconstruction of Industrial Installations Using Point Clouds and Images," Geodesy 6.2, Nederlandse Commissie voor Geodesie Netherlands Geodetic Commission, Delft, May 2006.|
|134||Tahir Rabbani, et al., "Efficient Hough Transform for Automatic detection of Cylinders in Point Clouds," Workshop "Laser Scanning 2005," Enschede, the Netherlands, Sep. 12-14, 2005.|
|135||Wikipedia, the free encyclopedia, "Level of Detail," http://en.wikipedia.org/wiki/Level_of_detail_(programming), May 23, 2008.|
|136||Yu Lifeng, et al., "Multi-Modality Medical Image Fusion Based on Wavelet Pyramid and Evaluation," Peking University, Beijing China, 1994.|
|137||*||Zippered Polygon Meshes from Range Images by Greg Turk et al.; Proc. SIGGRAPH '94, Jul. 1994; pp. 1-8.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8207964 *||Feb 22, 2008||Jun 26, 2012||Meadow William D||Methods and apparatus for generating three-dimensional image data models|
|US8379191 *||Jun 21, 2005||Feb 19, 2013||Leica Geosystems Ag||Scanner system and method for registering surfaces|
|US8436901 *||Feb 20, 2009||May 7, 2013||Id. Fone Co., Ltd.||Field monitoring system using a mobile terminal and method thereof|
|US8558848||Mar 9, 2013||Oct 15, 2013||William D. Meadow||Wireless internet-accessible drive-by street view system and method|
|US8600713 *||Sep 15, 2011||Dec 3, 2013||National Central University||Method of online building-model reconstruction using photogrammetric mapping system|
|US8884950||Jul 29, 2011||Nov 11, 2014||Google Inc.||Pose data via user interaction|
|US8890866||Mar 9, 2013||Nov 18, 2014||Visual Real Estate, Inc.||Method and apparatus of providing street view data of a comparable real estate property|
|US8902226||Mar 12, 2013||Dec 2, 2014||Visual Real Estate, Inc.||Method for using drive-by image data to generate a valuation report of a selected real estate property|
|US9098870||Sep 10, 2013||Aug 4, 2015||Visual Real Estate, Inc.||Internet-accessible real estate marketing street view system and method|
|US9134339||Sep 24, 2014||Sep 15, 2015||Faro Technologies, Inc.||Directed registration of three-dimensional scan measurements using a sensor unit|
|US9311396||Mar 9, 2013||Apr 12, 2016||Visual Real Estate, Inc.||Method of providing street view data of a real estate property|
|US9311397||Mar 9, 2013||Apr 12, 2016||Visual Real Estate, Inc.||Method and apparatus of providing street view data of a real estate property|
|US9367914 *||May 16, 2013||Jun 14, 2016||The Johns Hopkins University||Imaging system and method for use of same to determine metric scale of imaged bodily anatomy|
|US9377298 *||Apr 4, 2014||Jun 28, 2016||Leica Geosystems Ag||Surface determination for objects by means of geodetically precise single point determination and scanning|
|US9384277||May 28, 2012||Jul 5, 2016||Visual Real Estate, Inc.||Three dimensional image data models|
|US9528834||Dec 15, 2014||Dec 27, 2016||Intelligent Technologies International, Inc.||Mapping techniques using probe vehicles|
|US20100118116 *||Jun 28, 2007||May 13, 2010||Wojciech Nowak Tomasz||Method of and apparatus for producing a multi-viewpoint panorama|
|US20110001795 *||Feb 20, 2009||Jan 6, 2011||Hyun Duk Uhm||Field monitoring system using a mobil terminal|
|US20110032507 *||Jun 21, 2005||Feb 10, 2011||Leica Geosystems Ag||Scanner system and method for registering surfaces|
|US20120150573 *||Dec 13, 2010||Jun 14, 2012||Omar Soubra||Real-time site monitoring design|
|US20120265494 *||Sep 15, 2011||Oct 18, 2012||National Central University||Method of Online Building-Model Reconstruction Using Photogrammetric Mapping System|
|US20130321583 *||May 16, 2013||Dec 5, 2013||Gregory D. Hager||Imaging system and method for use of same to determine metric scale of imaged bodily anatomy|
|US20140298666 *||Apr 4, 2014||Oct 9, 2014||Leica Geosystems Ag||Surface determination for objects by means of geodetically precise single point determination and scanning|
|US20150172628 *||Jun 30, 2011||Jun 18, 2015||Google Inc.||Altering Automatically-Generated Three-Dimensional Models Using Photogrammetry|
|USRE45264 *||Apr 11, 2013||Dec 2, 2014||Visual Real Estate, Inc.||Methods and apparatus for generating three-dimensional image data models|
|U.S. Classification||342/357.31, 345/419, 348/218.1, 345/629|
|International Classification||G01S19/42, G01C15/00, G01S17/08, G01S19/48, G01S17/02, G06T15/00, H04N5/225, G01S5/14, G09G5/00|
|Cooperative Classification||G01S19/42, G01C15/00, G01S17/08, G01C11/02, G01S17/023|
|European Classification||G01S19/42, G01C11/02, G01S17/02C, G01C15/00|
|May 8, 2008||AS||Assignment|
Owner name: SQUARE 1 BANK, NORTH CAROLINA
Free format text: SECURITY INTEREST;ASSIGNOR:INTELISUM, INC.;REEL/FRAME:020930/0037
Effective date: 20070518
|Dec 20, 2011||FPAY||Fee payment|
Year of fee payment: 8
|Feb 12, 2016||REMI||Maintenance fee reminder mailed|
|Jul 6, 2016||LAPS||Lapse for failure to pay maintenance fees|