WO1998054674A1 - Combining digital images - Google Patents

Combining digital images

Info

Publication number
WO1998054674A1
Authority
WO
WIPO (PCT)
Application number
PCT/US1998/011042
Other languages
French (fr)
Inventor
Roy T. Hashimoto
Original Assignee
Enroute, Inc.
Application filed by Enroute, Inc.
Priority to EP98923862A (EP0983574A1)
Priority to AU76053/98A (AU7605398A)
Priority to JP11500991A (JP2000512419A)
Publication of WO1998054674A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods

Abstract

A computer-implemented method combines related source images, each represented by a set of digital data, by determining three-dimensional relationships between data sets representing related source images and creating a data set representing an output image by combining the data sets representing the source images in accordance with the determined three-dimensional relationships.

Description

COMBINING DIGITAL IMAGES
Background
The present invention relates to combining digital images.
Digital images typically comprise two-dimensional arrays of picture elements (pixels) and may be, for example, digitized photographs or computer-generated images. Many applications exist for combining digital images, including applications for determining camera motion between video frames to stabilize video images, for relating or recognizing the content of different images, for aligning images for mosaicing, for high-resolution enhancement, and for building detailed models for virtual reality applications. Further discussion of various applications is found in articles such as S. Mann & R.W. Picard, Video Orbits of the Projective Group: A New Perspective on Image Mosaicing, M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 338 (1995) and Richard Szeliski, Image Mosaicing for Tele-Reality Applications, Cambridge Research Laboratory Technical Report Series (CRL 94/2) (1994), both of which are incorporated by reference.
Combining images typically requires "registering" pairs of images, which matches two or more images containing overlapping scenes and describes the correspondence of one to another. The correspondence enables the images to be combined by mapping the image data into a common image space using any of a variety of transforms, such as affine and projective transforms. As described in the Mann &
Picard article, affine methods are simpler and are acceptable approximations when the correspondence between pictures is high, the images have a small field of view, or the content of the image is planar. Projective transform methods are more complex but can produce results that are mathematically more accurate for images acquired from a fixed camera location. Existing projective transform methods typically register a first image with a second by determining transform parameters corresponding to a two-dimensional projective transformation:

$$u' = \frac{m_0 u + m_1 v + m_2}{m_6 u + m_7 v + 1}, \qquad v' = \frac{m_3 u + m_4 v + m_5}{m_6 u + m_7 v + 1}$$

Equation 1

where (u, v) are the coordinates in an image space of a pixel of the first image and (u', v') are the coordinates of the pixel mapped into an image space of the second image. This transform has eight parameters, or degrees of freedom (m_0, ..., m_7). Solving for the eight degrees of freedom typically requires a non-linear approach, which can be computationally expensive and is not guaranteed to produce a correct result.
Summary
In general, in one aspect, the invention features a computer-implemented method for combining related source images, each represented by a set of digital data, by determining three-dimensional relationships between data sets representing related source images and creating a data set representing an output image by combining the data sets representing the source images in accordance with the determined three-dimensional relationships.
Certain implementations of the invention include one or more of the following features. Each of the source images and the output image has a corresponding image space, and determining three-dimensional relationships between source images further includes determining three-dimensional transformations of the source image spaces to the output image space. The output image space may be the image space of a source image. Determining a three-dimensional transformation of a source image space to the target image space further includes determining parameters describing the three-dimensional transformation. The parameters describe a camera orientation, a distance between the camera and the source image, and a distance between the camera and the target image.
In general, in another aspect, the invention features a memory device storing computer readable instructions for aiding a computer to perform the above method.
In general, in another aspect, the invention features an apparatus to combine related source images, each represented by a set of digital data, comprising a storage medium to store related source images, each represented by a set of digital data, and a processor operatively coupled to the storage medium and configured to perform the above method. Among the advantages of the invention are the following. The invention may be used to create composite images with wide fields of view, to create fully surrounding environments, and to enhance resolution. Further, source images may be merged without identification of specific corresponding features.
The invention determines projective transform parameters for a three-dimensional projective mapping, requiring solving for only five variables (degrees of freedom) rather than the eight required for two-dimensional projective mappings. The invention produces a solution that is always physically realizable, and because the dimensionality of the problem is reduced, a solution may be obtained more quickly. Further, because the parameters may be chosen to be directly related to how the images are acquired, solving for the parameters is readily simplified by further reducing the number of degrees of freedom if there are known constraints on the image acquisition, such as using a single focal length lens for multiple images or restricting motion (such as if the images are all acquired using a camera mounted on a tripod and allowing only panning).
Other advantages and features of the invention will become apparent from the following description and from the claims.
Brief Description of the Drawings
Figure 1 illustrates a flow diagram for combining images.
Figure 2 illustrates a computer system.
Figures 3 and 4 illustrate images to be combined.
Detailed Description
Represented in three dimensions, a digital image has corresponding camera parameters that describe a position of a camera relative to a scene such that the scene is viewed from the camera as it appears in the image. Each image further has a local image space defining a coordinate space in which the points in the image scene may be identified. The relationship between two images containing overlapping scenes can be described by a three-dimensional transformation, which describes the required reorienting of the camera from a first orientation, from which the scene is viewed as shown in the first image, to a second orientation, from which the scene is viewed as shown in the second image. An alternative description of the transformation is that it maps a point in the first image space to a point in the second image space, where both the original and mapped points correspond to the same point in the scene.
1. Image Acquisition
Referring to Figure 1, source images to be combined in an output image are accessed in a computer system (step 110). The source images are digital images, which may be, for example, digitized photographs or computer generated images.
The computer system may include special purpose hardware, a general purpose computer running application software, or a combination of both. Preferably, the invention is implemented in a computer program executing in a general purpose computer. Figure 2 illustrates an appropriate computer system 200, including a CPU 210, a RAM 220, and an I/O controller 230, coupled by a CPU bus 240. The I/O controller 230 is also coupled by an I/O bus 250 to input devices such as a keyboard 260 and a mouse 270, and output devices such as a monitor 280.
A pixel of a digital image generally includes data describing its color. Pixel data from the source images are eventually combined to create pixel data for the output image. Pixel data of monochrome images are single channel, having a single value indicating a degree of intensity. Pixel data of color images typically have multiple channels, having a value for each channel with the pixel color indicated by the combination of the channel values. For example, in the RGB (red-green-blue) color system, color data for a pixel will include a value for each of the red channel, the green channel, and the blue channel. To simplify computations involving color data, multichannel color data may be converted to a single value. Various conversion methods may be used. For example, the single value may be based on the value of the green channel, the average of the values of the color channels, or another combination of the values of the color channels. For example, as explained in FOLEY ET AL.,
COMPUTER GRAPHICS: PRINCIPLES AND PRACTICE 589 (Addison-Wesley 2d ed. 1996), "luminance" is a color characteristic derived from a weighted sum of the color channels, 0.30*R + 0.59*G + 0.11*B.
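For illustration only, the weighted sum above can be written as a one-line conversion (a minimal NumPy sketch; the function name and channel ordering are assumptions, not part of the patent text):

    import numpy as np

    def luminance(rgb):
        """Collapse an H x W x 3 RGB array to a single channel using the
        weighted sum 0.30*R + 0.59*G + 0.11*B cited above."""
        rgb = np.asarray(rgb, dtype=np.float64)
        return 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]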
To be combined, each of the source images must contain a scene that overlaps with the scene of at least one of the other source images. The amount of overlap needed may vary depending on factors such as the content of the images, temporal scene changes, temporal camera parameter changes (for example, different focus, exposure, or point of view), and camera fidelity (for example, distortion and compression artifacts). In general, the greater the overlap, the more accurate the output image will be. For example, while an output image can be created from source images having as little as 20% overlap, source images having closer to 50% overlap are likely to result in a more accurate output image.
The present invention further assumes that each of the source images to be combined is taken from approximately the same camera location, although factors such as rotation, focus, exposure and zoom may vary.
2. Finding Overlapping Areas in Source Images
Referring again to Figure 1, the overlapping areas between pairs of source images are estimated (step 120). If the images received in step 110 have no available information about which images overlap, each pair of images is tested to determine overlapping areas. Alternatively, user-guided indication of overlap may be provided to simplify this process.
For example, the pairs containing overlapping areas may be reflected in the order in which they are entered. Thus, if the set of images can be arranged as a single unbranching chain or as a single loop as shown in Figure 3, the invention can be implemented to require images to be input such that sequentially input images overlap. Overlap for source images having more complex overlap relationships, as shown in Figure 4, can be specified by having the user manually enter overlap information. For example, a user interface may be provided to prompt the user to type the information or to display a graphical user interface allowing the user to graphically indicate overlapping images. Automatic and user-aided detection of overlap may be used in combination, and user input may be used to identify the overlapping area in an image pair, in addition to identifying the images that overlap.
One possible overlap estimation method is cross-correlation between the images. The cross-correlation function
$$C(m,n) = \sum_i \sum_j I_0(i-m,\, j-n)\, I_1(i,\, j)$$

Equation 2

where I_0(i, j) and I_1(i, j) are the image intensities. Computed between two translated versions of the same image, the cross-correlation will tend to have a maximum at the values of m and n indicating the best translation. This formulation shows computation of the cross-correlation in the spatial domain. The function may also be computed using the frequency domain:

$$C = \mathcal{F}^{-1}\bigl(\mathcal{F}(I_0)\,\mathcal{F}^{*}(I_1)\bigr)$$

Equation 3

where ℱ is the discrete Fourier transform, ℱ* is the complex conjugate of the discrete Fourier transform, and ℱ⁻¹ is the inverse discrete Fourier transform. The advantage of using the frequency domain to compute the cross-correlation is that a fast Fourier transform algorithm can be used to perform the operation more efficiently.
When the frequency domain is used, a variant of regular correlation called phase correlation can be performed instead. In this variant, the Fourier transform of each image is normalized so that each complex value in the transform has unit magnitude (this is also equivalent to normalizing after multiplying the transform and the transform conjugate). The advantage of phase correlation over regular correlation is that the windowing effects, which occur because the images are of finite size, are reduced. Further detail on phase correlation may be found in the 1996 article by Xin Tong and Krishna Nayak , Image Registration Using Phase Correlation (available at URL http://www-leland.stanford.edu/~xin/ee262/report/report.html).
The phase correlation procedure takes the discrete Fourier transform of both images using an FFT (fast Fourier transform), multiplies the transforms element by element (taking the conjugate of the second before multiplication), computes the pure phase by normalizing each complex value to a magnitude of one, and takes the inverse Fourier transform. The estimate of the best translation to use for overlap is then determined by scanning the result for the location of the maximum magnitude value.
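A minimal NumPy sketch of this procedure (illustrative only; the function name, the small epsilon guard, and the peak-wrapping convention are assumptions, not taken from the patent):

    import numpy as np

    def phase_correlate(img0, img1):
        """Estimate the (row, col) translation relating two equally sized
        images by phase correlation, following the steps described above."""
        F0 = np.fft.fft2(img0)
        F1 = np.fft.fft2(img1)
        cross = F0 * np.conj(F1)                    # multiply, conjugating the second transform
        cross /= np.maximum(np.abs(cross), 1e-12)   # keep only the phase (unit magnitude)
        corr = np.fft.ifft2(cross)
        peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        # Peaks past the midpoint correspond to negative (wrapped) shifts,
        # because the FFT-based correlation is circular.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

The sign of the recovered shift depends on which image is treated as the reference.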
An alternative to using an input image directly in correlation is to use a level from an image pyramid, such as a Laplacian pyramid described in articles such as Burt & Adelson, The Laplacian Pyramid as a Compact Image Code, IEEE Transactions on
Communications 31:532-40 (1983). Briefly, known algorithms may be used to construct Laplacian pyramids from source images, where levels of varying resolution encode the source image content at varying spatial frequencies. Typically, a sample of the lowest resolution level is based on the average of the image data of the region in the source image corresponding to that sample. A sample of the next resolution level is based on the difference between the data of the corresponding sample of the lower resolution level and the average of the data of the corresponding region in the source image.
One method of creating a Laplacian pyramid is to increase the number of levels incrementally. A pyramid is initialized by making the image itself a level. This is a valid single-level pyramid. The number of levels is incremented by convolving the current lowest resolution level with an averaging filter to form a moving-average image, replacing the current lowest resolution level with the current lowest resolution level minus the moving-average (pixel by pixel), and creating the new lowest resolution level by subsampling the moving-average. This process is repeated to create as many levels as desired in the pyramid.
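The level-by-level construction just described might be sketched as follows (the 5-pixel uniform averaging filter and the factor-of-two subsampling are assumptions chosen for the example, not values specified by the patent):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def laplacian_pyramid(image, levels):
        """Build a Laplacian pyramid by repeatedly averaging, differencing,
        and subsampling, as outlined above."""
        pyramid = [np.asarray(image, dtype=np.float64)]   # a valid single-level pyramid
        for _ in range(levels - 1):
            lowest = pyramid[-1]
            average = uniform_filter(lowest, size=5)      # moving-average image
            pyramid[-1] = lowest - average                # replace with the difference
            pyramid.append(average[::2, ::2])             # subsample to form the new lowest level
        return pyramid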
Correlation (including phase correlation) to find an estimate of image overlap can be based on corresponding Laplacian pyramid levels of the input images instead of being based directly on the input images themselves. One advantage of using the pyramid data is that performing correlation over the lower resolution pyramid data is faster than over the full resolution image data. Another advantage is that the original image data generally gives more weight (importance) to the brighter areas than to the darker areas, which is not desirable. In contrast, because the Laplacian levels other than the lowest resolution level are signed images with approximately zero mean values, the bright areas in the original input images are not given more weight than darker areas, thus avoiding the need for normalization that would otherwise be required. A discussion of using Laplacian pyramids for image correlation may be found in Hansen et al., Real-time Scene Stabilization and Mosaic Construction, 1994 IMAGE UNDERSTANDING WORKSHOP (1994).
3. Source Image Correction
Referring again to Figure 1, the source images are prepared for combination by correcting the images so that the overlapping portions of source images are more closely matched (step 130), which in turn can produce a more accurate output image. The specific correction operations may vary for different applications, and if desired, some may be performed prior to determining the overlapping areas of the images (step 120), since removing distortions may allow the overlapping areas to be more easily identified. On the other hand, if the overlapping portions are determined first (step 120), the minimizing operations (step 130) may be restricted to the overlapping portions rather than being performed on entire images to reduce the amount of required computation.
An example of a correction operation corrects nonuniform magnification produced in an image by the lens. The resulting image distortion is typically radial lens distortion, which changes the magnification in the image based on the distance from the center of projection. (The center of projection is the point in the image that matches where the optical axis of the lens intersects the image plane.) Magnification that decreases with distance is called barrel distortion; magnification that increases with distance is called pincushion distortion. Radial distortion can be modeled with a simple polynomial, as described in articles such as R.Y. Tsai, An Efficient and Accurate Camera Calibration Technique, PROCEEDINGS IEEE COMPUTER VISION AND PATTERN RECOGNITION 364-74 (1986), and can be removed by resampling based on the polynomial model.
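As a sketch of removal by resampling, the following assumes a single-coefficient radial model in which a point at undistorted radius r is imaged at radius r(1 + k1*r^2); the model order, the coefficient k1, and the use of bilinear resampling are illustrative assumptions, not values from the patent:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def remove_radial_distortion(image, k1, center=None):
        """Resample 'image' so that radial distortion modeled by
        r_distorted = r * (1 + k1 * r**2) about 'center' is removed."""
        h, w = image.shape
        cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        dy, dx = yy - cy, xx - cx
        r = np.hypot(dx, dy)                    # undistorted radius of each output pixel
        scale = 1.0 + k1 * r**2                 # polynomial distortion model
        src_y = cy + dy * scale                 # where to sample in the distorted input
        src_x = cx + dx * scale
        # The sign of k1 determines whether barrel or pincushion distortion is removed.
        return map_coordinates(image, [src_y, src_x], order=1, mode="nearest")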
Another example of a correction operation corrects nonuniform scaling caused by perspective projections, causing objects near the edges of the image to appear larger than the same objects would appear near the center. This effect is increased as the field of view widens. In these cases, it may be helpful to reproject one or both images before doing correlation, normalizing the scaling in an area of proposed overlap.
Reprojection can be time-consuming, but it can be performed automatically without user interaction. If performed prior to correlation (step 120), the overlapping areas may be easier to identify. However, correlation (step 120) generally works acceptably well without reprojection, which is an optional operation.
A special case of this technique is to reproject into cylindrical coordinates, which normalizes the scaling across the entire images if the cylindrical axis is vertical and the motion is purely pan, or if the cylindrical axis is horizontal and the motion is purely tilt.
This is appropriate when the camera orientation is known to have those restrictions.
Another correction operation normalizes the brightness in the source images to be combined. Source images may differ in brightness level. This may result, for example, if images are captured with autoexposure camera settings. One simple approach to normalize the brightness determines the average color data for pixels of the overlapping region in a source image and normalizes the color data values A_ij for each pixel (i, j) in that region to the calculated average. A more sophisticated procedure may use a model that accounts for nonlinearities in mapping brightness to data values, and for differences resulting from different device types (for example, CCD and film) used in acquiring the source images, as well as differences among devices of the same type.
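One plausible reading of the simple normalization above is to scale a source image so that the mean of its overlapping region matches that of the other image; a sketch (the helper function and its mask argument are hypothetical):

    import numpy as np

    def match_brightness(source, target, overlap_mask):
        """Scale 'source' so that the mean intensity of its overlapping region
        matches that of 'target'. For simplicity the boolean mask is assumed to
        address corresponding overlap pixels in both images."""
        src_mean = source[overlap_mask].mean()
        tgt_mean = target[overlap_mask].mean()
        return source * (tgt_mean / src_mean)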
Another correction operation addresses varying focus of source images. If the overlapping portion is in focus in one image and out of focus in the other, blurring tends to bring them closer to matching. One way to minimize the variation in focus in source images to be combined is by blurring the source images slightly with a filter to reduce high-frequency noise, producing the effect of putting all source images slightly out of focus.
Yet another possible correction operation may correct for vignetting, an effect describing the variation in the amount of light gathered as the incident direction moves off the optical axis. This effect, explained in articles such as Just what is a Good Lens Anyway?, which is found at http://web.ait.ac.nz/homepages/staff/rvink/optics.html, is a darkening of the corners and edges of the image relative to the center. Correction involves modeling the vignetting function and multiplying by its reciprocal across the image.
4. Determining Transformation Parameters
Referring to Figure 1, transformation parameters are determined from each source image to a target image (step 140). For example, if images A and B contain overlapping scenes, image B may be the target image where the transformation parameters map pixels of image A into the image space of image B.
The Mann & Picard article discusses derivation of two-dimensional transformation parameters. Alternatively, three-dimensional transformation parameters may be derived. As described above, an image has corresponding three-dimensional camera parameters defining a camera position from which a scene appears as shown in the image. An ideal camera projects the scene before it onto a plane. Defining the camera in a local coordinate system such that it points along the z-axis, the projection onto the plane z = z0 is:
$$[\,wu \;\; wv \;\; wz_0 \;\; w\,] = [\,x \;\; y \;\; z \;\; 1\,]\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1/z_0\\ 0 & 0 & 0 & 0\end{bmatrix}$$

Equation 4

where (u, v) are coordinates defined in the plane z = z_0, (x, y, z) are the corresponding coordinates defined in 3-space, and w is the homogeneous variable (so that u = z_0 x / z and v = z_0 y / z). We can express the points in 3-space in terms of their image space coordinates (u, v):

$$[\,x \;\; y \;\; z\,] = \frac{z}{z_0}\,[\,u \;\; v \;\; z_0\,]$$

Equation 5

where z is the unknown depth of the imaged point.
Equations 4 and 5 assume that the images are captured from approximately the same camera location, and hold for both source and target images in their own local image spaces. The difference between the local image spaces can be expressed as a rotation:

$$[\,w_t u_t \;\; w_t v_t \;\; w_t z_t \;\; w_t\,] = [\,u_s \;\; v_s \;\; z_s \;\; 1\,]\; R \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1/z_t\\ 0 & 0 & 0 & 0\end{bmatrix}$$

Equation 6

where (u_t, v_t) are the remapped image coordinates in the target image space, (u_s, v_s) are the image coordinates in the source image space, and z is the unknown depth of the point in the scene that is imaged at (u_s, v_s). This depth can be arbitrarily set to 1 because the results are invariant with respect to z. R is a rotation matrix that relates the local image spaces of the source and target images, and reprojects pixels of the source image into the image space of the target image.
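In scalar terms, Equation 6 rotates the source-plane point (u_s, v_s, z_s) into the target frame and reprojects it onto the plane z = z_t. A minimal sketch under that reading, using only the 3x3 rotational block of R and a row-vector convention (names are illustrative):

    import numpy as np

    def remap_point(u_s, v_s, z_s, z_t, R):
        """Map source image coordinates (u_s, v_s) into the target image space,
        where R is the 3x3 rotation relating the two local image spaces."""
        x, y, z = np.array([u_s, v_s, z_s]) @ R   # rotate the point on the source plane
        u_t = z_t * x / z                         # reproject onto the target plane z = z_t
        v_t = z_t * y / z
        return u_t, v_t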
The projection depth parameters z_s and z_t are related to the field of view of the imaging systems used to capture the source and destination images, respectively:

$$a = 2\arctan\frac{v_{max} - v_{min}}{2z}$$

where a is the vertical field of view, v_max - v_min is the vertical image extent, and z is the projection depth. The horizontal field of view may be calculated by replacing the vertical image extent with the horizontal image extent. The projection depth may be calculated from a known field of view with:

$$z = \frac{v_{max} - v_{min}}{2\tan(a/2)}$$

Another interpretation of the projection depth is the lens focal length expressed in the same units as the image extent. Thus if the image extent is the physical size of the active area of the camera image plane, the projection depth is simply the focal length of the lens. If the image extent has been normalized, e.g. from -1 to 1, the corresponding projection depth can be derived from the lens focal length using the ratio of the normalized image extent to the size of the camera image plane active area.
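The two relations are inverses of one another; a small illustrative sketch:

    import numpy as np

    def field_of_view(extent, z):
        """Field of view from image extent and projection depth."""
        return 2.0 * np.arctan(extent / (2.0 * z))

    def projection_depth(extent, fov):
        """Projection depth from a known field of view (the inverse relation)."""
        return extent / (2.0 * np.tan(fov / 2.0))

    # Example: a 24 mm-tall film frame behind a 50 mm lens gives a vertical
    # field of view of roughly 27 degrees: np.degrees(field_of_view(24.0, 50.0))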
The three-dimensional mapping may have as many as five degrees of freedom: up to three for rotation, one for z_s (the depth of the source projection plane), and one for z_t (the depth of the target projection plane). The rotation matrix R can be parameterized in a number of different ways. For example, using Euler angles, the matrix R may be factored into z (roll) (θ), x (tilt) (ψ), and y (pan) (φ) rotations:

$$R(\theta,\psi,\phi) =
\begin{bmatrix}\cos\theta & \sin\theta & 0 & 0\\ -\sin\theta & \cos\theta & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}
\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & \cos\psi & \sin\psi & 0\\ 0 & -\sin\psi & \cos\psi & 0\\ 0 & 0 & 0 & 1\end{bmatrix}
\begin{bmatrix}\cos\phi & 0 & -\sin\phi & 0\\ 0 & 1 & 0 & 0\\ \sin\phi & 0 & \cos\phi & 0\\ 0 & 0 & 0 & 1\end{bmatrix}$$

Equation 7
This parameterization reflects the ways camera motion is commonly constrained in creating images, such as pan only, pan and tilt only, or tilt only. Thus, for source images known to result from cameras having such constrained motions, corresponding camera parameters are readily determined and the degrees of freedom are decreased, simplifying the task of determining the three-dimensional transformation.
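A sketch of this parameterization, restricted to the 3x3 rotational block of Equation 7 (the homogeneous padding is omitted; function and variable names are illustrative):

    import numpy as np

    def rotation_matrix(theta, psi, phi):
        """R(theta, psi, phi): roll about z, tilt about x, pan about y,
        multiplied in the order shown in Equation 7."""
        ct, st = np.cos(theta), np.sin(theta)
        cp, sp = np.cos(psi), np.sin(psi)
        cf, sf = np.cos(phi), np.sin(phi)
        Rz = np.array([[ct, st, 0.0], [-st, ct, 0.0], [0.0, 0.0, 1.0]])   # roll
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, sp], [0.0, -sp, cp]])   # tilt
        Ry = np.array([[cf, 0.0, -sf], [0.0, 1.0, 0.0], [sf, 0.0, cf]])   # pan
        return Rz @ Rx @ Ry

    # A pan-only camera (e.g., mounted on a tripod) needs a single parameter:
    # R = rotation_matrix(0.0, 0.0, phi)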
A Gaussian pyramid is created for each of the pair of images. A Gaussian pyramid, like a Laplacian pyramid, is a multiresolution data set, and if desired, may be created only for the estimated overlapping areas of each image for a pair of images. Each level of a Gaussian pyramid is a smoothed and subsampled version of the next higher resolution layer.
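A minimal sketch of Gaussian pyramid construction, assuming a two-dimensional grayscale array; the 5-tap binomial smoothing filter is one common choice and is an assumption made here for illustration.

```python
import numpy as np

def gaussian_pyramid(image, levels):
    """Build a Gaussian pyramid: each level is a smoothed, 2x-subsampled
    copy of the previous (finer) level. `image` is a 2-D float array."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # binomial smoothing filter
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        top = pyramid[-1]
        # Separable smoothing: filter the rows, then the columns.
        smoothed = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode='same'), 1, top)
        smoothed = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode='same'), 0, smoothed)
        pyramid.append(smoothed[::2, ::2])                 # subsample by 2
    return pyramid   # pyramid[0] is the finest level
```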
The best parameters for mapping the source image into the target image space, so that the overlapping portions are aligned, are determined sequentially for increasing resolution levels of the Gaussian pyramids, using the parameters of the previous level as the initial estimate for the parameter values at the current level. If the source and target images are overlapping source images evaluated in step 120, an initial estimate of the three-dimensional transformation parameters between the lowest resolution levels of the images may be determined by converting the translation parameters found in the overlap estimation (step 120) into rotational parameters where:
$$\theta_0 = 0 \qquad \phi_0 = \arctan\frac{u_0}{z_0} \qquad \psi_0 = -\arctan\frac{v_0}{z_0}$$
Equation 8
where (u0, v0) is the best image translation between the pair of images determined in step 120 and z0 is the initial estimate for the projection distance.
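A sketch of the conversion in Equation 8, assuming the translation (u0, v0) and the initial projection distance z0 are already available from step 120; atan2 is used here merely for numerical convenience.

```python
import math

def initial_rotation(u0, v0, z0):
    """Convert the best pairwise translation (u0, v0) from the overlap
    estimation into initial roll/tilt/pan estimates (Equation 8)."""
    theta0 = 0.0                      # no initial roll
    phi0 = math.atan2(u0, z0)         # pan from the horizontal translation
    psi0 = -math.atan2(v0, z0)        # tilt from the vertical translation
    return theta0, psi0, phi0
```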
In general, a solution may be found at each level by optimizing an error function that is minimized when the source image, remapped by the transformation parameters, is aligned with the overlapping portion of the target image. One possible error function explained in the Szeliski article is the least squared difference between the intensity data of pixels of the overlapping portions in the remapped source image and the target image:
$$\chi^2 = \sum\left(\frac{A_t(u_t,v_t) - A_s(u_s,v_s)}{\sigma}\right)^2$$
Equation 9
where χ² is the error function, As and At are the source and target image intensity functions (interpolating between samples), (us, vs) are the source image coordinates, and (ut, vt) are the resulting target image coordinates obtained by remapping the source image coordinates (us, vs) using the estimated transformation parameters. (σ is a weighting that may be set to 1, and will be omitted in subsequent equations.)
This optimization problem may be solved by methods such as gradient descent, inverting the Hessian matrix, or a combination method like Levenberg-Marquardt, which are discussed in greater detail in references such as PRESS ET AL., NUMERICAL RECIPES IN C 540-47 (Cambridge University Press, 1988). These solution methods require partial derivatives of the error metric with respect to the transformation parameters. For some parameter p, the partial is:
$$\frac{\partial\chi^2}{\partial p} = 2\sum\left(A_t(u_t,v_t) - A_s(u_s,v_s)\right)\left(\frac{\partial A_t}{\partial u_t}\frac{\partial u_t}{\partial p} + \frac{\partial A_t}{\partial v_t}\frac{\partial v_t}{\partial p}\right)$$
Equation 10
where ∂At/∂ut and ∂At/∂vt are the spatial derivatives of the pixel values of the target image at
positions defined by pixels remapped from the source image. However, estimated transformation parameters will typically not map a source image pixel exactly to a pixel location in the target image, so interpolation may be required both to determine the value at a target pixel and to compute the derivatives. Because interpolation in the target image is done a large number of times (at every mapped source image pixel for each iteration over each pyramid level), the least squared difference determination can be expensive.
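To make that cost concrete, here is a minimal sketch of evaluating Equation 9, assuming grayscale arrays and a caller-supplied remap(us, vs) function that applies the current transformation parameters; every remapped source pixel requires a bilinear lookup in the target image.

```python
import numpy as np

def bilinear(image, u, v):
    """Bilinear interpolation of `image` at fractional coordinates (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * image[v0, u0] +
            du * (1 - dv) * image[v0, u0 + 1] +
            (1 - du) * dv * image[v0 + 1, u0] +
            du * dv * image[v0 + 1, u0 + 1])

def least_squares_error(source, target, remap):
    """Equation 9 with sigma = 1: sum of squared differences between source
    pixels and bilinearly interpolated target pixels at the remapped
    positions. remap(us, vs) returns (ut, vt) under the current parameters."""
    chi2 = 0.0
    h, w = source.shape
    for vs in range(h):
        for us in range(w):
            ut, vt = remap(us, vs)
            # Only pixels that land inside the target image contribute.
            if 0 <= ut < target.shape[1] - 1 and 0 <= vt < target.shape[0] - 1:
                diff = bilinear(target, ut, vt) - source[vs, us]
                chi2 += diff * diff
    return chi2
```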
An alternative to minimizing the square of the pixel value differences is optical flow, described in references such as HORN & SCHUNCK, Determining Optical Flow, ARTIFICIAL INTELLIGENCE (1981). Optical flow treats the source and target images as images of the same scene separated in time and assumes that although the scene moves with respect to the camera between the time of capturing the source image and the target image, the brightness values remain constant. Thus, expanding the equation
A(u, v, t) = A(u + Δu, v + Δv, t + Δt)
as a Taylor series, canceling the zeroth order term A(u,v,t) from both sides, and omitting terms higher than linear yields the optical flow equation:
$$\frac{\partial A}{\partial u}\Delta u + \frac{\partial A}{\partial v}\Delta v + \frac{\partial A}{\partial t}\Delta t = 0$$
Equation 11
Treating the source image as reflecting the brightness at time t and the target image as reflecting the brightness at time t + 1, the best transformation parameters are found when the following equation is minimized:
$$\chi^2 = \sum\left(\frac{\partial A_s}{\partial u_s}(u' - u_s) + \frac{\partial A_s}{\partial v_s}(v' - v_s) + (A_t - A_s)\right)^2$$
Equation 12
In this formulation, (u′, v′) is (ut, vt) projected back into the source image space using the inverse of the current approximation of the transform. While this approach contains the difference in pixel values (At − As) as a subterm and thus is more complex than the least squared difference, in practice this term generally can be approximated with a constant without affecting convergence. Thus, performance is improved because spatial derivatives are required only in the source image, where interpolation is not needed.
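A compact sketch of the optical-flow-style residual of Equation 12, assuming the source-image spatial derivatives and the back-projected displacements (u′ − us, v′ − vs) have already been computed as arrays of the same shape; note that only source-image quantities appear.

```python
import numpy as np

def optical_flow_error(As, At, dAs_du, dAs_dv, du, dv):
    """Equation 12 style error, evaluated per source pixel: dAs_du and
    dAs_dv are spatial derivatives in the source image, (du, dv) is the
    displacement of each source pixel back-projected from the target
    space, and At - As plays the role of the temporal term."""
    residual = dAs_du * du + dAs_dv * dv + (At - As)
    return np.sum(residual ** 2)
```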
This formulation takes the partial derivatives of the error metric with respect to the transformation parameters. The major component of these partials is ∂ut/∂p and ∂vt/∂p, where p is a transformation parameter. One way to compute these values is to precompute derivative matrices before looping over the pixels of the source image to calculate the error metric. The transformation is:
$$[w_t u_t \quad w_t v_t \quad w_t z_t \quad w_t] = [u_s \quad v_s \quad 0 \quad 1]\,T(z_s)\,R(\theta,\psi,\phi)\,P(z_t)$$
Equation 13
where the source projection plane has been moved into a translation matrix T, R is the rotation, and P is the target projection. (This equation corresponds to Equation 6 above, where z = 1 , zs is reflected in matrix T, and matrix P represents the projection matrix.)
Taking a partial derivative of this function is a straight-forward calculation. For example, the derivative with respect to θ is:
$$\frac{\partial}{\partial\theta}\,[w_t u_t \quad w_t v_t \quad w_t z_t \quad w_t] = [u_s \quad v_s \quad 0 \quad 1]\;T(z_s)\,\frac{\partial R(\theta,\psi,\phi)}{\partial\theta}\,P(z_t)$$
Equation 14
Thus, the derivative matrix with respect to θ is T(zs) ∂R(θ,ψ,φ)/∂θ P(zt). This can be computed and saved, along with the derivative matrices for the other parameters, at the beginning of each iteration. Then, the product of each source image pixel (us, vs) and the derivative matrix is calculated. To compute ∂ut/∂θ from ∂(wt ut)/∂θ, ∂wt/∂θ, wt ut, and wt, and similarly compute ∂vt/∂θ from ∂(wt vt)/∂θ, ∂wt/∂θ, wt vt, and wt, we use the rule for the derivative of a quotient:
$$\frac{\partial u_t}{\partial\theta} = \frac{\partial}{\partial\theta}\left(\frac{w_t u_t}{w_t}\right) = \frac{1}{w_t}\left(\frac{\partial(w_t u_t)}{\partial\theta} - u_t\frac{\partial w_t}{\partial\theta}\right)$$
$$\frac{\partial v_t}{\partial\theta} = \frac{\partial}{\partial\theta}\left(\frac{w_t v_t}{w_t}\right) = \frac{1}{w_t}\left(\frac{\partial(w_t v_t)}{\partial\theta} - v_t\frac{\partial w_t}{\partial\theta}\right)$$
Equation 15
Computing the derivative matrix with respect to any of the five parameters independently is similar. There is a useful special case when zs and zt are allowed to vary but are constrained to be equal. This occurs when both source and target images are known to be based on photographs taken with the same camera position and lens settings. The derivative matrix for this combined parameter is (using the product rule):
$$\frac{\partial}{\partial z}\Bigl(T(z)\,R(\theta,\psi,\phi)\,P(z)\Bigr) = \frac{\partial T(z)}{\partial z}\,R(\theta,\psi,\phi)\,P(z) + T(z)\,R(\theta,\psi,\phi)\,\frac{\partial P(z)}{\partial z}$$
Equation 16
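A sketch of the per-pixel use of the precomputed derivative matrices together with the quotient rule of Equation 15, using the row-vector convention above and the T and P layouts implied by Equations 6 and 13; M and dM_dtheta are assumed to have been precomputed once per iteration as T(zs) R(θ,ψ,φ) P(zt) and T(zs) ∂R/∂θ P(zt).

```python
import numpy as np

def T(zs):
    """Source-plane translation (row-vector convention): [u, v, 0, 1] T = [u, v, zs, 1]."""
    M = np.eye(4)
    M[3, 2] = zs
    return M

def P(zt):
    """Projection onto the target plane z = zt."""
    return np.array([[zt, 0., 0., 0.],
                     [0., zt, 0., 0.],
                     [0., 0., zt, 1.],
                     [0., 0., 0., 0.]])

def remap_and_derivative(us, vs, M, dM_dtheta):
    """Remap one source pixel with M = T R P and apply the quotient rule
    (Equation 15) to obtain (ut, vt) and their derivatives w.r.t. theta."""
    p = np.array([us, vs, 0.0, 1.0])
    wu, wv, wz, w = p @ M                 # [wt*ut, wt*vt, wt*zt, wt]
    dwu, dwv, dwz, dw = p @ dM_dtheta     # derivatives of the same products
    ut, vt = wu / w, wv / w
    dut = (dwu - ut * dw) / w             # d(wt*ut / wt)/dtheta
    dvt = (dwv - vt * dw) / w
    return ut, vt, dut, dvt
```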
At each pyramid level, the transformation parameters are determined using an iterative optimization method such as those described in Press' NUMERICAL RECIPES IN C, which concludes when a termination condition is met. For example, a termination condition may be met if little or no progress is being made in decreasing χ² in subsequent iterations, or if a predetermined limit of time or number of iterations is exceeded. The solution at each layer is used as the initial estimate for the solution at the next higher resolution layer until the solution is found for the highest resolution layer of the pyramids. The value of χ² between the images is tracked as the method proceeds, and the covariance matrix of the parameters is available from the singular value decomposition.
5. Output rendering
Referring to Figure 1, once transformation parameters are determined for the pairs of overlapping source images, the image data from the source images can be reprojected onto an output plane to form the output image (step 150). A prerequisite to rendering the output image is to transform all input images (which have, to this point, been related only to one another by a set of pairwise transformations) into a single, common output space. An image may be arbitrarily chosen as a "transform reference image", whose coordinate system is chosen as the output space for this part of the process. Each other image will be positioned relative to this image.
Choosing a relative transform for each image to the transform reference image is a process of finding a path via the pairwise relations from the image to the transform reference image. Local transforms from the pairwise relations are accumulated (with matrix multiplication) along the path to produce the complete transform to the space of the transform reference image, as shown in the sketch below. A given image can frequently be transformed into the space of the transform reference image through several pathways. Referring to Figure 4, each of images A, B, and C overlaps the other two, so there are pairwise relations between A and B, A and C, and B and C. If A is chosen to be the transform reference image, the transform of B into the output space may be determined using either the single path B to A, or with the composite path B to C then C to A. If all the pairwise transforms contain no error, then different paths between two images should yield the same output space transformation. In the presence of error, however, some paths may be better than others. Rather than selecting between multiple paths arbitrarily, it is helpful to try to determine which path is likely to be the most accurate. A simple way to do this is to constrain the input images to have some easily analyzed topology. For example, if all the images overlap sequentially in the order of input, the pairwise relations between adjacent images can be used.
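A minimal sketch of accumulating pairwise transforms along a path to the transform reference image; the dictionary keying and image labels are assumptions made for illustration.

```python
import numpy as np

def transform_to_reference(path, pairwise):
    """Accumulate pairwise transforms along `path`, a list of image ids
    ending at the transform reference image. pairwise[(a, b)] is the
    4x4 matrix mapping image a into the space of image b."""
    M = np.eye(4)
    for a, b in zip(path, path[1:]):
        M = M @ pairwise[(a, b)]
    return M

# Example (Figure 4): map B into A's space via the composite path B -> C -> A.
# M_BA = transform_to_reference(['B', 'C', 'A'], pairwise)
```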
Another technique that does not depend on prior knowledge of the topology is based on the generation of a minimum spanning tree from the graph of images, where the cost of each link is the negative of some confidence metric of that link. A minimum spanning tree is defined on a connected graph where each edge has a specified cost, and is the acyclic connected subset graph that has the lowest cost sum. A discussion and algorithm for generating the minimum spanning tree is in SEDGEWICK, ALGORITHMS, pp. 452-61 (2d ed. 1988). The process removes pairwise relations so that there is exactly one path between any two images. Furthermore, the sum of the confidence metrics of the remaining pairwise relations is at least as high as that of any other spanning tree.
One confidence metric used is the number of pixels of overlap between each pair of images. Other confidence metrics can be employed including those based on the result of the projective fit.
If all overlap relations are specified, running the minimum spanning tree algorithm eliminates paths that have less aggregate overlap and are thus less likely to generate an accurate result.
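A sketch of selecting such a spanning tree with Kruskal's algorithm, using the negative of the pixel-overlap count as the edge cost; the link representation is an assumption made for illustration.

```python
def spanning_tree_links(links):
    """Kruskal's algorithm over the image-overlap graph. `links` is a list
    of (overlap_pixels, image_a, image_b) tuples; using the negative
    overlap as the edge cost keeps the highest-confidence relations."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    kept = []
    for overlap, a, b in sorted(links, key=lambda e: -e[0]):
        ra, rb = find(a), find(b)
        if ra != rb:                        # adding this edge creates no cycle
            parent[ra] = rb
            kept.append((a, b))
    return kept
```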
Image data reprojection is not limited to output planes, which are theoretically limited to containing less than a 180 degree field of view and are practically limited to about 120 degrees to avoid objectionable perspective effects. Other projections, such as into cylindrical or azimuthal coordinates, introduce distortion but can effectively display fields of view approaching or exceeding 180 degrees. The entire gamut of map projections, which are normally used to display the outer surface of a sphere, can be used. Map projections are described in greater detail in articles such as Peter H. Dana's article, Map Projection Overview, available at http://www.utexas.edu/depts/grg/gcraft/notes/mapproj/mapproj.html.
One possible reprojection is to project the output image onto the interior faces of a cube. An interactive panoramic viewer with no view direction constraints can be created by rendering a cube using those projections as texture maps. This viewing mechanism can be directly accelerated by using hardware designed for three-dimensional applications.
If the overlapping portions of image data in the source images match exactly (for example, in content, brightness, and color), any blending technique may be used to create image data for the output image in the overlapping portions. In general, however, the images will differ noticeably (even when camera and lens settings are identical), which suggests using a blending technique that increases the consistency between pixels of overlapping portions of the images, as well as in the surrounding portions. One such technique uses a weighted average of pixel data from source images as the pixel data for the output image, where the weight accorded to data from a pixel of a particular image is proportional to the distance of that pixel to the edge (in both dimensions) in that image.
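One way to realize such distance-to-edge weights, sketched for a rectangular image; taking the product of the per-axis distances is an assumption, and other monotone functions of the distances would serve equally well.

```python
import numpy as np

def edge_distance_weights(height, width):
    """Per-pixel blending weight proportional to the distance of the pixel
    to the nearest image edge in each dimension, so weights fall toward
    zero at the borders and peak at the image center."""
    wy = np.minimum(np.arange(height) + 1, height - np.arange(height))
    wx = np.minimum(np.arange(width) + 1, width - np.arange(width))
    return np.outer(wy, wx).astype(float)
```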
Output rendering may be inverse or forward. Inverse rendering determines the data for one output pixel at a time: it determines which source images contribute to that output pixel, sums weighted pixel data from the appropriate source images, normalizes to a unit weight, stores the result, and then moves to the next output pixel. Forward rendering evaluates each source image just once to determine data for the pixels of the output image, transforming the data for each source image pixel and accumulating the weighted data and the weights into buffers for the output image. After all source images have been evaluated, the weighted sums for the output image data are normalized to a unit weight.
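A sketch of the forward-rendering accumulation and final normalization, assuming the caller supplies already-transformed integer output coordinates together with per-pixel weights (for example, those produced by the weight map sketched above).

```python
import numpy as np

def forward_render(samples, out_shape):
    """Forward rendering: each source pixel contributes weighted data once;
    the accumulated sums are normalized to unit weight at the end.
    `samples` yields (pixel_value, weight, (row, col)) tuples whose output
    coordinates have already been transformed and rounded."""
    acc = np.zeros(out_shape)
    wsum = np.zeros(out_shape)
    for value, weight, (r, c) in samples:
        if 0 <= r < out_shape[0] and 0 <= c < out_shape[1]:
            acc[r, c] += weight * value
            wsum[r, c] += weight
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```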
If desired, the invention may interface with hardware or software capable of rendering textured polygons to accelerate the rendering process. Each image can be converted into a texture map, and a rectangle enclosing the image extent and transformed with the transformation parameters can be passed to the renderer.
6. Making Transformations Globally Consistent
As noted above, a potential problem in creating final renderings results from the fact that the above transformations are determined only on a pairwise basis. For example, the transformations between overlapping source images A, B, and C are supposed to be invertible (M_AB = M_BA^-1) and transitive (M_AB M_BC = M_AC). However, because the transformation parameters are typically not perfectly estimated, the invertible and transitive properties may not be satisfied. Often the relationship between two images can be determined through more than one path. For example, if image A overlaps image B, image B overlaps image C, and image A overlaps image C, as shown in Figure 4, possible paths from A to C include M_AC and M_AB M_BC. Selecting just one path can cause poor results when the relationship is not the most direct one. This is particularly evident when the image sequence forms a large loop, as it does when constructing fully circular views. The mapping error builds up between the successive combinations of images in the loop, often with the result that a visible mismatch exists between the last image and the first image.
It is possible to satisfy the invertible and transitive constraints by estimating all mappings simultaneously, but this is very expensive computationally. This would require constructing a composite error function of all pairwise relations, and solving over a large dimensional space (the number of degrees of freedom times the number of pairwise relations).
An alternative is to treat the transformation parameters computed on a local basis as being estimates for the global parameters. When a data set is fit to an analytic model using a chi-squared minimization, the Hessian matrix at the parameter values found by the fit is the inverse of the covariance matrix for the fitted parameters. (See PRESS ET AL., NUMERICAL RECIPES IN C pp. 540-47.) The covariance matrix is a characterization of the distribution of the parameter estimate, and thus may be used directly to determine the likelihood that the actual parameter values are within some specified epsilon of the estimated values, assuming a multivariate normal distribution. Expressed slightly differently, the covariance matrix can be used as a measure of the relative confidence in the estimate of each parameter value.
The global parameters are derived by adjusting the estimates to satisfy the global constraints in a way that changes high confidence estimates less than low confidence estimates. Assuming that the distribution is the multivariate normal distribution, the probability distribution is proportional to:
$$\exp\left(-\tfrac{1}{2}\,(p - p')^{T}\,C^{-1}\,(p - p')\right)$$
Equation 17
(See PRESS ET AL., NUMERICAL RECIPES IN C p. 554.) The global constraints may be satisfied by maximizing this probability, or equivalently (because the negative exponential is monotonically decreasing), by minimizing the term:
$$D(p, p') = \sum_{i} (p_i - p_i')^{T}\,C_i^{-1}\,(p_i - p_i')$$
where the sum is over the image pairs, p_i is the unknown vector of mapping parameters for pair i, p_i' is the locally computed estimate of the mapping parameters for pair i, and C_i^-1 is the inverse covariance matrix of the parameters for pair i. This second formulation may also be interpreted as minimizing the squared distance from the estimate measured in "units" of standard deviation. While this approach still results in a large dimensional space, it does not require the composite error function of all pairwise relations.
For a single loop of n images, we can compactly express the global constraint as the identity transform:
$$M_{1,2}\,M_{2,3}\cdots M_{n-1,n}\,M_{n,1} = I$$
Equation 18
Additional constraint terms can be added for additional loops that exist in the source images. Constrained optimization problems of this type can be solved using Lagrangian multipliers, setting the partial derivatives with respect to each parameter and λ of the following expression to zero, and solving the resulting system of equations.
$$\sum_{i}(p_i - p_i')^{T}\,C_i^{-1}\,(p_i - p_i') + \lambda\left(M_{1,2}\,M_{2,3}\cdots M_{n-1,n}\,M_{n,1} - I\right)$$
Equation 19
Finding and rendering the images with those parameters distributes the error across all the image pairs.
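A minimal sketch of the adjustment, replacing the Lagrange-multiplier formulation with a soft penalty on the loop-closure constraint for simplicity; p_local, inv_cov, loop, params_to_matrix, and the penalty weight lam are assumed inputs, and scipy's general-purpose minimizer stands in for a purpose-built solver.

```python
import numpy as np
from scipy.optimize import minimize

def global_adjust(p_local, inv_cov, loop, params_to_matrix):
    """Adjust the locally estimated parameter vectors so that a loop of
    transforms composes to the identity, changing high-confidence
    estimates less than low-confidence ones. p_local: per-pair parameter
    vectors; inv_cov: their inverse covariance matrices; loop: pair
    indices forming the loop; params_to_matrix(p): 4x4 transform."""
    sizes = [len(p) for p in p_local]
    splits = np.cumsum(sizes)[:-1]

    def objective(x, lam=1e3):
        ps = np.split(x, splits)
        # Covariance-weighted distance from the local estimates.
        d = sum((p - p0) @ C @ (p - p0)
                for p, p0, C in zip(ps, p_local, inv_cov))
        # Soft loop-closure constraint (Equation 18) as a penalty term.
        M = np.eye(4)
        for i in loop:
            M = M @ params_to_matrix(ps[i])
        return d + lam * np.sum((M - np.eye(4)) ** 2)

    x0 = np.concatenate(p_local)
    return np.split(minimize(objective, x0).x, splits)
```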
Other embodiments are within the scope of the following claims. What is claimed is:

Claims

1. A computer-implemented method for combining related source images, each represented by a set of digital data, comprising:
determining three-dimensional relationships between data sets representing related source images; and
creating a data set representing an output image by combining the data sets representing the source images in accordance with the determined three-dimensional relationships.
2. The method of claim 1, wherein each of the source images and the output image has a corresponding image space, and determining three-dimensional relationships between source images further comprises determining three-dimensional transformations of the source image spaces to the output image space.
3. The method of claim 2, wherein the output image space is the image space of a source image.
4. The method of claim 2, wherein determining a three-dimensional transformation of a source image space to the target image space further comprises determining parameters describing the three-dimensional transformation.
5. The method of claim 4, wherein the parameters describe a camera orientation, a distance between the camera and the source image, and a distance between the camera and the target image.
6. A memory device storing computer readable instructions for aiding a computer to combine related source images, each represented by a set of digital data, comprising instructions to:
determine three-dimensional relationships between data sets representing related source images; and
create a data set representing an output image by combining the data sets representing the source images in accordance with the determined three-dimensional relationships.
7. An apparatus to combine related source images, each represented by a set of digital data, comprising:
a storage medium to store related source images, each represented by a set of digital data; and
a processor operatively coupled to the storage medium and configured to:
determine three-dimensional relationships between data sets representing related source images; and
create a data set representing an output image by combining the data sets representing the source images in accordance with the determined three-dimensional relationships.