WO2001048695A1 - Optical flow and image forming - Google Patents

Optical flow and image forming

Info

Publication number
WO2001048695A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
motion
pixel
vector
image
Prior art date
Application number
PCT/FI2000/001119
Other languages
French (fr)
Inventor
Martti Kesäniemi
Original Assignee
Kesaeniemi Martti
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kesaeniemi Martti filed Critical Kesaeniemi Martti
Priority to JP2001548343A priority Critical patent/JP2003518671A/en
Priority to EP00987539A priority patent/EP1252607B1/en
Priority to KR1020027008406A priority patent/KR20020075881A/en
Priority to AU23788/01A priority patent/AU2378801A/en
Priority to US10/168,938 priority patent/US20040022419A1/en
Priority to DE60020887T priority patent/DE60020887T2/en
Priority to AT00987539T priority patent/ATE298117T1/en
Publication of WO2001048695A1 publication Critical patent/WO2001048695A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • the images to be processed are introduced into the flow determining system.
  • the size of the sampling matrix, i.e. the pixel number into which the images are digitalized, is selected, unless the images are already rendered in digital form.
  • the value of each pixel is measured from the images (Figure 14a, step 131).
  • the value can be for example a colour or a grayish hue. What value is used can be selected in accordance with the situation.
  • the number of motion vectors is selected, and the two first pixel lines are low-pass filtered for example by convolution with the Gaussian function (step 132).
  • in connection with the filtering, the greatest and smallest pixel values are stored, e.g. g_max = max{g₁, g₂}, and the greatest permitted value for the unit parameter is determined (step 133).
  • the fixing plane and the fixing points therein are determined up to the midpoint of the pixel lines (step 134).
  • the number of fixing points is the same as the number of motion vectors determined in connection with the filtering.
  • the fixing of the motion vectors can also be performed in an earlier step.
  • Each motion vector (fixed to each fixing point) is submitted to the following procedure: a) The values of the image points, the motion vector shift and the derivatives at the intersections of the motion vector and the pixel line are sought, x+u(x) in the first line and x-u(x) in the second line (step 135). b) The Laplace function, i.e. the smoothness of the representation, is calculated for example by calculating the difference between the shift of the motion vector u(x) and the mean of the shifts of the motion vectors flanking it (step 136). c) The additional constraint is calculated (step 137), that is, e.g. the difference between the shift of the motion vector and the shifts of the neighbouring motion vectors.
  • d) The quantities are substituted in equation (4-1), and thus the additional constraint changes parameter a and the motion vector shift is upgraded (step 138). Steps a)-d) are performed for each motion vector, and they are repeated 10-15 times, or until the change in the representation is considered to be sufficiently small (step 139).
  • in steps 1310 and 1311 the process is restarted with a new low-pass filtering.
  • the initial shifts of the new lines to be added are obtained by interpolating between the shifts of the previous lines.
  • the process described above is repeated until the desired line density has been achieved.
  • Each pixel line pair in the image is processed in the manner described above, whereafter the method proceeds to processing pixel columns (steps 1314 and 1315). The pixel column pairs in the image are processed similarly.
  • the desired new pixel lines and columns are calculated by means of the flow and the original pixels by either of the methods described above (equation 4-5, Figure 4 or Figure 11); a sketch of this step follows at the end of this list.
  • the new pixel lines and columns can also be calculated immediately subsequent to the determination of the flow, and thus the storing of the flow is replaced by forming new pixel lines/columns.
  • a bidimensional flow matrix is formed from the motion vectors to be calculated e.g. in accordance with the attached Figure 15.
  • the fixing points of the vectors corresponding to the lines are denoted with R and the fixing points of the vectors calculated on the basis of the columns are denoted with S.
  • the fixing points are interconnected with elucidatory lines in the figure.
  • the empty points are points in the original image. In this connection, a rectangular set of pixel co-ordinates has been examined, but the pixel rows could just as well be at an angle of 45°, between which rows the new pixels are formed.
  • the invention affords a fairly short calculation time on account of the fact that there is no need to calculate each intermediate image separately.
  • the invention also enables considerable changes between images on account of the suppression of the motion-minimizing term as the constraints are broken at the occlusion points.
  • the method is very robust on account of automatic adjustment (change of weighting factor).
  • the invention has very wide application.
  • the motion information of the optical flow can be used for example for interpolating or extrapolating new images, improving the quality of an image sequence, and speed measurement by means of a camera.
  • the determination of corresponding image points also has bearing upon forming three-dimensional images on the basis of stereo image pairs.
  • Motion recognition based on images is used in several applications of computer vision.
  • Motion information can also be used for altering images in an image set and for reducing, i.e. compressing, the information required for transmitting an image set. Since by means of this method an unlimited number of new images can be formed using one flow, image compression is more effective than in the prior art methods.
  • the method can also be used for enhancing image resolution and producing high-quality still frames from video images.
  • although the invention has been described above mainly by means of examples of an image to be formed between two original images and of the enlargement of an aliased image, the invention can be implemented in other embodiments within the scope of the inventive idea; for example, more than two images can be used for determining the flow and forming a new image.
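The value-forming step referred to above can be sketched as follows. This is a minimal, self-contained illustration and not the patent's reference implementation: the names midway_line and double_rows, the nearest-neighbour sampling and the equal blend at the midway plane are assumptions, and the per-row-pair shift arrays flows[i] are assumed to have been computed beforehand, e.g. with the one-dimensional iteration of equations (4-1)-(4-3) given in the detailed description.

```python
# Illustrative sketch only: forming new pixel lines from line-pair flows.
import numpy as np

def midway_line(g1, g2, u):
    """New pixel line halfway between adjacent lines g1 and g2, given the shifts u
    of the motion vectors fixed midway between them."""
    n = g1.shape[0]
    xs = np.arange(n)
    xa = np.clip(np.rint(xs + u).astype(int), 0, n - 1)  # intersections with line g1
    xb = np.clip(np.rint(xs - u).astype(int), 0, n - 1)  # intersections with line g2
    return 0.5 * (g1[xa] + g2[xb])                        # value on the midway plane

def double_rows(img, flows):
    """Interleave every original row with the midway row computed from flows[i],
    the shift array of the i-th row pair (assumed to be computed beforehand)."""
    rows = []
    for i in range(img.shape[0] - 1):
        rows.append(img[i])
        rows.append(midway_line(img[i], img[i + 1], flows[i]))
    rows.append(img[-1])
    return np.stack(rows)
```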

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Holo Graphy (AREA)
  • Computer And Data Communications (AREA)
  • Non-Silver Salt Photosensitive Materials And Non-Silver Salt Photography (AREA)
  • Optical Record Carriers And Manufacture Thereof (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Road Signs Or Road Markings (AREA)

Abstract

The idea of the invention is to produce an optical flow that uniquely determines, for the points of the images to be formed between the original images, the corresponding points on the surfaces of the original images. The flow is determined in such a way that all of the new images to be formed between the original images can be formed by means of the same optical flow. Each motion vector of the flow has a permitted area of movement defined by the motion vectors surrounding said motion vector, within the bounds of which its direction can change when the flow is formed. The area of movement is used as an additional constraint, the satisfying of which will ensure the uniqueness of the flow. A unique flow is realized by adjusting the proportion of the cross-correlation taking account of motion and of the smoothness criterion when the flow is formed, in such a way that in the vicinity of a violation of the additional constraints, the proportion of cross-correlation taking account of motion is suppressed. This should preferably be done in such a way that the proportion of the cross-correlation is zeroed when the additional constraints are violated.

Description

Optical flow and image forming
Field of the Invention
The present invention relates generally to the field of digital image processing and, more particularly, to recognizing and analyzing movement in images. The invention also relates to measuring the optical flow between two images and to forming new pictures whose features are intermediate between those of two originals. Optical flow denotes a field formed from motion vectors, connecting the corresponding image points of two pictures. The present invention can also be utilized to enhance image resolution, i.e. the pixel number.
Background of the Invention
Digital visual imaging provides versatile possibilities for processing and compressing pictures and forming new visual images. Owing to the movements of the object, there usually are differences between two successive images. Digital visual imaging can also be used for motion recognition. The application area of visual imaging is extensive: a wide variety of computer vision applications, compression of video images, clinical section images, enhancement of resolution, etc.
One of the most usual functions of visual imaging is the representation of motion information between two pictures (usually taken in successive moments). For example, if the first picture depicts a dog wagging its tail, in the second picture the position of the dog's tail is different. The aim is to determine the motion information included in the movement of the tail. This is done by defining the optical flow between the pictures, that is, the motion vector field that connects the corresponding points in the two images (a and b, Figure 1 ). When the optical flow has been defined, it can be used to form a new picture - a so-called intermediate picture - between the two originals.
There are many prior art methods for defining optical flow, for example gradient-based methods and motion estimation over a hierarchy of resolutions. Other known methods include block-based methods and methods matching objects in images (U.S. Patent 5,726,713 and U.S. Patent 4,796,087). These methods are not, however, capable of yielding as high-resolution a flow as the gradient-based methods. The so-called feature-based methods, founded on recognizing and connecting the corresponding features between two images, are in a category of their own. Yet it is difficult to develop any general feature classification, wherefore the feasibility of feature-based methods is largely dependent on the picture material to be processed.
The gradient-based methods (U.S. Patent 5,241,68) aim at forming a motion vector field between two images, connecting the images in such a way that the square sum of the differences in value (e.g. differences in luminosity) of the combined image points is minimized (minimising of cross-correlation taking account of motion). However, this condition is not sufficient to uniquely determine the flow generated: the smoothness of the flow, i.e. the requirement that the distance between the points in the first picture must be approximately the same in the second picture as well, is used as an additional condition. The smoothness criterion for the flow will present problems at so-called occlusion points, where the object present in a picture moves to be located on top of another object in the picture (Figure 2, reference o). At these points, the optical flow is not smooth. Attempts have been made to avoid the resulting problems by relinquishing the smoothness criterion at points that can be presumed to be the edge points of an object represented in an image. For example, rapid changes in the gray scale of the images and the properties of the flow defined can be used in classifying the points as likely edge points.
Owing to the non-smoothness of the optical flow, the prior art methods must calculate all the intermediate pictures separately, which will increase the requisite computation and the time required for the process, depending on the number of intermediate images required. The aim of the present invention is to diminish these disadvantages of the prior art.
Summary of the Invention
The object of the invention is achieved in the way set forth in the claims. The idea of the invention is to produce a flow that uniquely determines the points in the intermediate images corresponding to the points on the surfaces of the original images; in other words, each point in an intermediate image has one corresponding point in both of the original pictures. The flow is determined in such a way that all of the new pictures to be formed in between the original pictures can be formed by means of the same optical flow. Each motion vector of the flow has a permitted area of movement defined by the motion vectors surrounding said motion vector, within the bounds of which its direction can change when the flow is formed. Hence, the area of movement is applied as an additional constraint in the present invention, adherence to which ensures the uniqueness of the flow. Since the image information often includes areas (such as occlusion points) infringing the additional constraint, ensuring the uniqueness of these areas will warrant the uniqueness of the entire flow. A unique flow can be realized by adjusting the proportion of the cross-correlation taking account of movement and of the smoothness criterion when the flow is formed, in such a way that in the vicinity of a violation of the additional constraints, the proportion of cross-correlation taking account of movement is suppressed. This should preferably be done in such a way that the proportion of cross-correlation is zeroed when the additional constraints are violated.
Since in the present method an unlimited number of new images can be formed by means of one flow, the method enables, among other things, a more efficient way than heretofore of compressing an image set. The invention enables the use of a rapidly convergent and computationally light algorithm for determining the flow, which makes it advantageous to utilize the invention also in applications to which the production of new images does not directly relate. Furthermore, the invention enables, for instance, enlargement of an original image, so that the enlarged image is more agreeable to the human eye than an image enlarged by the prior art methods.
List of Drawings
In the following, the invention will be described in detail with reference to the examples of Figures 1-15, set forth in the accompanying drawings in which
Fig. 1 shows an example of an optical flow between two images, Fig. 2 shows an example of occlusion points of an optical flow, Fig. 3 illustrates the determining of an individual motion vector of the flow in accordance with the invention and the shift of the motion vector, Fig. 4 depicts an example of the search of a motion vector passing through an image point in a new image in accordance with the invention, Fig. 5 illustrates in flow chart form the search of the value of an image point in a new image,
Fig. 6 illustrates the change in the weighting factor of the cross-correlation portion in the permitted area of movement of the motion vector, Fig. 7 illustrates the permitted area of an individual motion vector on the image plane when the constraining factor is a single neighbouring motion vector,
Fig. 8 illustrates the permitted area of an individual motion vector on the image plane when the constraining factor is constituted by four neighbouring motion vectors,
Fig. 9 illustrates the permitted areas of an individual motion vector on both image planes,
Fig. 10 depicts in flow chart form an example of forming an optical flow in accordance with the invention, Fig. 11 depicts in flow chart form an example of forming a new image on a new image plane,
Fig. 12 shows by way of example enlarging of an aliased image by prior art methods and by the method of the invention,
Fig. 13 shows an example of motion vectors to be formed between two pixel lines/columns,
Fig. 14a depicts in flow chart form an example of forming a new enlarged image in accordance with the invention,
Fig. 14b represents a continuation of Fig. 14a,
Fig. 15 illustrates image enlargement in accordance with the invention when the pixel lines and columns are processed simultaneously.
Detailed Description of the Invention
In the following, the mathematical background of the invention will be described. Many of the formulae to be set forth are previously known, wherefore also other equations corresponding to them will be found in the literature in the field - for example for calculating the cost function and iteration. Yet mathematical consideration is necessary for understanding and describing the invention. The vectors are denoted with bold lettering in the present study. In this context, the use of two images to determine an optical flow and a new image is studied, but it is also possible to use a larger number of images, in which case the noise portions appearing in the images can be eliminated.
Determining an optical flow (Figure 1) between two images can be represented as an optimization task of two variables, wherein the interrelation of the variables of the function to be minimized must be selected. (There may also be a larger quantity of variables depending on the application.) As stated previously, the square sum of the value differences of the image points and the smoothness of the flow are normally employed as variables in the gradient-based methods. The aim is thus to minimize the difference between the values (for example luminosity values) of the image points, i.e. pixels, connected by the flow on the one hand, and the smoothness of the flow, i.e. the requirement that points adjacent in the first image are mapped adjacent to one another in the second image as well, on the other hand. Let a vector field composed of individual motion vectors L (Figure 3) be formed between the images. The vector field is fixed between the images in such a way that the fulcrum k of each vector is midway between images a and b. The motion vector fixed at point k = (x, y) intersects the image surfaces at points k + t = (x+u, y+v) and k - t = (x-u, y-v), where t is the shift of the motion vector from the position orthogonal to the surface of the images.
Let us consider the vector field as continuous, in which case the equation to be minimized will be

J = ∬ f( u(x, y), v(x, y), u_x, u_y, v_x, v_y ) dx dy,   (1-1)

where the functional within the integral equation will be

f = α ( g₁(x+u(x, y), y+v(x, y)) − g₂(x−u(x, y), y−v(x, y)) )² + (∂u(x, y)/∂x)² + (∂u(x, y)/∂y)² + (∂v(x, y)/∂x)² + (∂v(x, y)/∂y)²
  = α ( g₁(x+u, y+v) − g₂(x−u, y−v) )² + u_x² + u_y² + v_x² + v_y²,   (1-2)

where g₁ is the value of the pixel of the first image, g₂ is the value of the pixel of the second image and u_x, u_y, v_x, v_y are partial derivatives in relation to the lower index.
The functional to be minimized is composed of the smoothness criterion, i.e. the sum of the partial derivatives of the flow, ux + uy 2 + v 2 + vy 2, on the one hand, and of the cross-correlation criterion taking account of motion, ( g., (x+u, y+v) - g2 (x-u, y-v))2, on the other hand. The interrelation of the criteria is determined by the parameter a.
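As a concrete reading of (1-1) and (1-2), the short sketch below evaluates a discrete counterpart of the functional for a candidate flow. It is only an illustration under stated assumptions: the names sample_bilinear and flow_energy are invented for this sketch, bilinear sampling and forward differences stand in for the continuous quantities, and none of it is the patent's reference implementation.

```python
# Illustrative sketch only: discrete counterpart of functional (1-1)/(1-2).
import numpy as np

def sample_bilinear(img, xs, ys):
    """Sample img at real-valued coordinates (xs, ys) with bilinear interpolation."""
    h, w = img.shape
    xs = np.clip(xs, 0, w - 1.001)
    ys = np.clip(ys, 0, h - 1.001)
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    return ((1 - fy) * ((1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1])
            + fy * ((1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]))

def flow_energy(g1, g2, u, v, alpha):
    """Discrete analogue of J for a flow whose vectors are fixed midway between g1 and g2."""
    h, w = g1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Cross-correlation term taking account of motion: the vector fixed at (x, y)
    # meets the first image at (x+u, y+v) and the second at (x-u, y-v).
    diff = sample_bilinear(g1, xs + u, ys + v) - sample_bilinear(g2, xs - u, ys - v)
    data = alpha * np.sum(diff ** 2)
    # Smoothness term: squared forward differences approximate u_x, u_y, v_x, v_y.
    smooth = sum(np.sum(np.diff(f, axis=ax) ** 2) for f in (u, v) for ax in (0, 1))
    return data + smooth
```

Minimizing flow_energy over (u, v) is the optimization task described above; the iterations derived next do this without ever forming J explicitly.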
The minimum of the functional J can be sought by using the calculus of variations, by setting the variations of J relative to u and v as zero. With this constraint, the following equations are satisfied:
∂f/∂u − d/dx ( ∂f/∂u_x ) − d/dy ( ∂f/∂u_y ) = 0   (1-3a)

∂f/∂v − d/dx ( ∂f/∂v_x ) − d/dy ( ∂f/∂v_y ) = 0   (1-3b)

The first terms in equations (1-3a) and (1-3b) can be written into the following form:

∂f/∂u = 2α ( g₁(x+u, y+v) − g₂(x−u, y−v) ) ( ∂g₁(x+u, y+v)/∂x + ∂g₂(x−u, y−v)/∂x )   (1-4a)

∂f/∂v = 2α ( g₁(x+u, y+v) − g₂(x−u, y−v) ) ( ∂g₁(x+u, y+v)/∂y + ∂g₂(x−u, y−v)/∂y )   (1-4b)

The latter terms in equations (1-3) can be expressed as:

d/dx ( ∂f/∂u_x ) + d/dy ( ∂f/∂u_y ) = 2 u_{xx} + 2 u_{yy} = 2 ∇²u   (1-5a)

d/dx ( ∂f/∂v_x ) + d/dy ( ∂f/∂v_y ) = 2 v_{xx} + 2 v_{yy} = 2 ∇²v   (1-5b)

By substituting (1-4) and (1-5) in equations (1-3), we obtain:

α ( g₁(x+u, y+v) − g₂(x−u, y−v) ) ( ∂g₁/∂x (x+u, y+v) + ∂g₂/∂x (x−u, y−v) ) − ∇²u = 0,   (1-6a)

α ( g₁(x+u, y+v) − g₂(x−u, y−v) ) ( ∂g₁/∂y (x+u, y+v) + ∂g₂/∂y (x−u, y−v) ) − ∇²v = 0,   (1-6b)

where the Laplace function of the flow is ∇²u = u_{xx} + u_{yy} and ∇²v = v_{xx} + v_{yy}.
Depending on what mathematical method of calculation is used, the Laplace function can be approximated for example with terms ∇²u ≈ ū − u and ∇²v ≈ v̄ − v, wherein

ū(x, y) = 1/4 ( u(x, y−1) + u(x−1, y) + u(x+1, y) + u(x, y+1) ),   (1-7)

where u(x±1, y) and u(x, y±1) are the values at the calculation points surrounding point (x, y). The same also holds true for v̄. The number of calculation points in this study is 4, but it may also be e.g. 8. In Figure 7, the four adjacent points surrounding the point k being considered, said adjacent points being used for the calculation, are connected by a broken line. If eight surrounding calculation points were used in the calculation, each point surrounding the point k being considered would influence the value of ū and v̄. The flow can thus be determined by minimizing equations (1-6) by means of the following iterative process:

u_{n+1} = u_n + γ [ ( ū − u_n ) − α ( g₁(x+u_n, y+v_n) − g₂(x−u_n, y−v_n) ) ( ∂g₁/∂x (x+u_n, y+v_n) + ∂g₂/∂x (x−u_n, y−v_n) ) ]   (1-8a)

v_{n+1} = v_n + γ [ ( v̄ − v_n ) − α ( g₁(x+u_n, y+v_n) − g₂(x−u_n, y−v_n) ) ( ∂g₁/∂y (x+u_n, y+v_n) + ∂g₂/∂y (x−u_n, y−v_n) ) ],   (1-8b)

where γ is the relaxation parameter used in the iteration, determining the proportion of the upgrading term in the new motion vector value.
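A compact numerical sketch of the iteration (1-8) follows. It assumes nearest-neighbour sampling at the intersections, centred image gradients from numpy.gradient and Jacobi-style simultaneous updates; the names neighbour_mean and iterate_flow are illustrative rather than taken from the patent.

```python
# Illustrative sketch only: Jacobi-style iteration of equations (1-8a)/(1-8b).
import numpy as np

def neighbour_mean(f):
    """4-neighbour average of equation (1-7), with edge values replicated."""
    p = np.pad(f, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def iterate_flow(g1, g2, u, v, alpha, gamma, n_iter=100):
    """Update the shifts (u, v) of motion vectors fixed midway between g1 and g2."""
    h, w = g1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g1y, g1x = np.gradient(g1)      # dg1/dy, dg1/dx
    g2y, g2x = np.gradient(g2)
    for _ in range(n_iter):
        xa = np.clip(np.rint(xs + u).astype(int), 0, w - 1)   # intersection with image a
        ya = np.clip(np.rint(ys + v).astype(int), 0, h - 1)
        xb = np.clip(np.rint(xs - u).astype(int), 0, w - 1)   # intersection with image b
        yb = np.clip(np.rint(ys - v).astype(int), 0, h - 1)
        diff = g1[ya, xa] - g2[yb, xb]                        # g1(x+u, y+v) - g2(x-u, y-v)
        u = u + gamma * ((neighbour_mean(u) - u) - alpha * diff * (g1x[ya, xa] + g2x[yb, xb]))
        v = v + gamma * ((neighbour_mean(v) - v) - alpha * diff * (g1y[ya, xa] + g2y[yb, xb]))
    return u, v
```

Here gamma and alpha are plain constants; the adaptive weighting introduced with equations (3-1)-(3-3) below replaces alpha by a value computed per motion vector.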
Once the flow has been determined in accordance with equations (1-8), the values, i.e. colours (or the gray scale, for example) of the image points in the images between the known images can be determined. The values of the image points in the image halfway between the original images are obtained on the basis of the motion vectors and the intersections of the images. When it is desired to define the value of a point between the original images that is not situated midway between the images, the motion vector passing through this image point must first be determined, whereafter the value of the desired point is interpolated from the points of the original images. Said vector can be sought for instance by means of the following iterative algorithm (Figures 4 and 5): selecting the motion vector L0 whose fixing point (y, 0) is on the same line orthogonal to the images as the point to be determined (x, z) (step 51); determining the distance vector e0 between this motion vector and the point to be determined on the plane having the direction of the images (step 52); selecting as the new motion vector L1 the vector whose fixing point (y1, 0) is at the distance indicated by the distance vector determined in the preceding step from the fixing point of the previous motion vector (y, 0) in the reverse direction (-e0) (step 54); repeating these steps until the value of the distance vector is considered to be small enough (step 53). Finally, the value of the new point on the image plane is sought on the basis of the motion vector selected (step 55). The motion vector field can be made by interpolation to be continuous, that is, the motion vectors between the fixing points are formed by interpolating from the values of the motion vectors passing through the surrounding fixing points. Hence, motion vectors passing through the original fixing points or motion vectors interpolated between them can be used to form the new image.
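The search just described (steps 51-55) can be sketched as a small fixed-point iteration; equation (2-1) below gives its formal form. The bilinear sampler sample_bilinear from the earlier energy sketch is reused, and the name lookup_point as well as the linear blend of the two intersection values along the vector are assumptions of this illustration.

```python
# Illustrative sketch only: find the motion vector passing through the point (x, y)
# of the intermediate plane z and interpolate the point's value (Figures 4 and 5).
import numpy as np

def lookup_point(g1, g2, u, v, x, y, z, c=1.0, n_iter=20, tol=1e-3):
    """Value at position (x, y) on the plane z, with -1 <= z <= 1 (z = +1 is the
    first image, z = -1 the second, z = 0 the fixing plane)."""
    target = np.array([x, y], dtype=float)
    yx = target.copy()                              # step 51: start right below the target
    for _ in range(n_iter):
        t = np.array([sample_bilinear(u, yx[:1], yx[1:])[0],
                      sample_bilinear(v, yx[:1], yx[1:])[0]])
        e = target - (yx + t * z)                   # step 52: distance on the image plane
        if np.abs(e).max() < tol:                   # step 53: close enough, stop
            break
        yx = yx + c * e                             # step 54: next fixing-point candidate
    t = np.array([sample_bilinear(u, yx[:1], yx[1:])[0],
                  sample_bilinear(v, yx[:1], yx[1:])[0]])
    # Step 55: blend the two intersection values along the selected motion vector
    # (the linear blend in z is an assumption made for this sketch).
    va = sample_bilinear(g1, yx[:1] + t[0], yx[1:] + t[1])[0]
    vb = sample_bilinear(g2, yx[:1] - t[0], yx[1:] - t[1])[0]
    return 0.5 * (1 + z) * va + 0.5 * (1 - z) * vb
```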
By denoting the co-ordinates of the point to be determined as (x, z) and the co-ordinates of the fixing point of the motion vector passing through this point as (y, 0), the former can be expressed as:

y_n = x − c [ t(y_{n−1}) · z ],   (2-1)

where y_n is the new and y_{n−1} the previous candidate for the fixing point of the motion vector, and t = [u, v]. The coefficient c is a changeable upgrading coefficient dependent on the earlier upgradings. Let us examine the convergence of the algorithm (2-1) (presuming that c = 1): let us study on what conditions to be imposed on the vector field shift the algorithm will yield the situation

y_n + t(y_n) · z = x,   (2-2)

that is, when the following holds true:

lim_{n→∞} ( y_n + t(y_n) · z − x ) = 0,   (2-3)

where −1 < z < 1. The initial error is

e_0 = x − ( y_0 + t(y_0) · z ).   (2-4)

After the first iteration

y_1 = x − t(y_0) · z,   (2-5)

thus giving an error of

e_1 = x − ( y_1 + t(y_1) · z ) = x − ( ( x − t(y_0) · z ) + t( x − t(y_0) · z ) · z ) = t(y_0) · z − t( x − t(y_0) · z ) · z = z · ( t(y_0) − t( x − t(y_0) · z ) ).   (2-6)

Likewise, the errors at rounds n and n+1 will be obtained as

e_n = x − ( y_n + t(y_n) · z ) = x − t(y_n) · z − y_n

e_{n+1} = x − ( y_{n+1} + t(y_{n+1}) · z ) = − z · ( t( x − t(y_n) · z ) − t(y_n) ).   (2-7)

The ratio of the lengths of the error vectors e_n and e_{n+1} obtained is

|e_{n+1}| / |e_n| = |z| · | t( x − t(y_n) · z ) − t(y_n) | / | x − t(y_n) · z − y_n |.   (2-8)

By substituting a ≡ x − t(y_n) · z and b ≡ y_n, equation (2-8) can be written into the form

|e_{n+1}| / |e_n| = |z| · | t(a) − t(b) | / | a − b |.   (2-9)

Equation (2-9) reaches its minimum and maximum on the surface of the images (z = ±1). In order for the algorithm to be convergent with all values −1 < z < 1,

| t(a) − t(b) | / | a − b | < 1   (2-10)
must hold true. The lengths of the vectors can be expressed in a variety of ways, depending on what mathematical method of calculation is used. Preferred methods include the use of the 1-norm or infinity norm, for example. In the 1-norm, the length of the vector is the sum of the lengths of the vector components, |t| = |u| + |v|, and in the infinity norm the length of the vector is the length of the greatest vector component, |t| = max{ |u|, |v| }. By using the 1-norm, where a = (a₁, a₂) and b = (b₁, b₂), the left-hand side of equation (2-10) can be written into the form

|t(a) − t(b)| / |a − b| = ( |u(a₁, a₂) − u(b₁, a₂)| + |v(a₁, a₂) − v(b₁, a₂)| ) / |a₁ − b₁| · |a₁ − b₁| / |a − b|
+ ( |u(a₁, a₂) − u(a₁, b₂)| + |v(a₁, a₂) − v(a₁, b₂)| ) / |a₂ − b₂| · |a₂ − b₂| / |a − b|.   (2-11)

The right-hand side of equation (2-11) corresponds to the weighted sum of the difference quotients of vector field t: thus it still holds true that

r ( |u_x| + |v_x| ) + (1 − r) ( |u_y| + |v_y| ) < 1,   (2-12)

where r = |a₁ − b₁| / |a − b|, 0 < r < 1. In order for equation (2-12) to be satisfied with all values of r,

|u_x| + |v_x| < 1   (2-13a)

|u_y| + |v_y| < 1   (2-13b)

must hold true, where t(x, y) = [u(x, y), v(x, y)].
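For a discretely sampled flow the constraints (2-13) reduce to a simple check on the difference quotients between neighbouring fixing points. The sketch below, with illustrative names, forward differences and fixing-point spacing d, returns True when the whole field respects them.

```python
# Illustrative sketch only: check the additional constraints (2-13a)/(2-13b).
import numpy as np

def constraints_satisfied(u, v, d=1.0):
    """True if |u_x| + |v_x| < 1 and |u_y| + |v_y| < 1 hold everywhere, the partial
    derivatives being approximated by forward differences over the spacing d."""
    ux, vx = np.abs(np.diff(u, axis=1)) / d, np.abs(np.diff(v, axis=1)) / d
    uy, vy = np.abs(np.diff(u, axis=0)) / d, np.abs(np.diff(v, axis=0)) / d
    ok_x = (ux + vx) < 1.0          # constraint (2-13a), horizontal neighbours
    ok_y = (uy + vy) < 1.0          # constraint (2-13b), vertical neighbours
    return bool(ok_x.all() and ok_y.all())
```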
Hence, the algorithm described in equation (2-1 ) places additional constraints on the flow to be sought (equation 2-13), according to which the absolute value of the difference between the shifts of two neighbouring motion vectors must be less than the distance between the fixing points of the same neighbouring motion vectors. Since the additional constraints pertain to the partial derivatives of the flow, which the smoothness criterion seeks to minimize, satisfying the constraints can be guaranteed by reducing the proportion of the term representing cross-correlation in the upgrading. The constraints of equation (2-13) can be realized by replacing parameter a in equation (1-8) for example with a parameter obtained from equation 3-1 :
a(u_x, u_y, v_x, v_y) = α · max{ 0, 1 − max{ |u_x| + |v_x|, |u_y| + |v_y| } / d },   (3-1)

where d is the distance between the fixing points.

The absolute values of the partial derivatives |u_x|, |u_y|, |v_x| and |v_y| of equation (3-1) can be approximated for instance by means of the equations

|u_x(x, y)| = max{ |u(x+1, y) − u(x, y)|, |u(x, y) − u(x−1, y)| }

|v_x(x, y)| = max{ |v(x+1, y) − v(x, y)|, |v(x, y) − v(x−1, y)| }

|u_y(x, y)| = max{ |u(x, y+1) − u(x, y)|, |u(x, y) − u(x, y−1)| }

|v_y(x, y)| = max{ |v(x, y+1) − v(x, y)|, |v(x, y) − v(x, y−1)| },   (3-2)
wherein the differences between the shift of the motion vector and the shifts of its closest neighbours are calculated. The greatest difference of shifts indicates the smallest distance of the motion vector from the boundaries of- the area of movement. Hence, the maximum possible difference of the shifts equals the distance between the fixing points. There are also other ways of approximating the derivative, such as calculating the difference of the shifts of the neighbouring motion vectors to the motion vector being studied: |v(x, y+1 ) - v(x, y -1 )|, where the shift of the actual motion vector being studied is not used to determine the derivative. What is essential in equation (3-1 ) is that the values of a diminish when the partial derivatives increase, and that the proportion of the cross- correlation, taking account of motion, in the upgrading term vanishes when constraints (2-13) are broken (note Figure 2). In other words, when the flow is being iterated, for each motion vector its shortest distance to the limit at which the additional constraints are broken is calculated, and in accordance therewith the weighting of the term taking account of motion is upgraded. Equations (1-8) can now be written into form
u_{n+1} = u_n + η_D ( ū − u_n ) − η_x · a(u_x, u_y, v_x, v_y) ( g₁(x+u_n, y+v_n) − g₂(x−u_n, y−v_n) ) ( ∂g₁/∂x (x+u_n, y+v_n) + ∂g₂/∂x (x−u_n, y−v_n) )   (3-3a)

v_{n+1} = v_n + η_D ( v̄ − v_n ) − η_x · a(u_x, u_y, v_x, v_y) ( g₁(x+u_n, y+v_n) − g₂(x−u_n, y−v_n) ) ( ∂g₁/∂y (x+u_n, y+v_n) + ∂g₂/∂y (x−u_n, y−v_n) ),   (3-3b)

where η_D is the unit parameter of the smoothness criterion and η_x is the unit parameter of the cross-correlation taking account of motion.
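The constrained update (3-3) with the adaptive parameter a of (3-1) and (3-2) can be sketched as one Jacobi sweep over the whole vector field. As before, the nearest-neighbour sampling and the function names (adaptive_alpha, neighbour_mean, constrained_update) are assumptions of the sketch rather than the patent's implementation; d is the spacing of the fixing points.

```python
# Illustrative sketch only: equations (3-1), (3-2) and one sweep of (3-3a)/(3-3b).
import numpy as np

def adaptive_alpha(u, v, alpha, d):
    """Per-vector weight a(u_x, u_y, v_x, v_y) of (3-1), using the approximations (3-2)."""
    def abs_diff(f, axis):
        p = np.pad(f, 1, mode='edge')
        if axis == 0:      # difference quotient in the y direction
            fwd, bwd = np.abs(p[2:, 1:-1] - f), np.abs(f - p[:-2, 1:-1])
        else:              # difference quotient in the x direction
            fwd, bwd = np.abs(p[1:-1, 2:] - f), np.abs(f - p[1:-1, :-2])
        return np.maximum(fwd, bwd)
    worst = np.maximum(abs_diff(u, 1) + abs_diff(v, 1),   # |u_x| + |v_x|
                       abs_diff(u, 0) + abs_diff(v, 0))   # |u_y| + |v_y|
    return alpha * np.maximum(0.0, 1.0 - worst / d)

def neighbour_mean(f):
    """4-neighbour average of equation (1-7), edges replicated."""
    p = np.pad(f, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def constrained_update(g1, g2, u, v, alpha, eta_d, eta_x, d=1.0):
    """One Jacobi sweep of (3-3a)/(3-3b) with nearest-neighbour sampling."""
    h, w = g1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g1y, g1x = np.gradient(g1)
    g2y, g2x = np.gradient(g2)
    xa = np.clip(np.rint(xs + u).astype(int), 0, w - 1)
    ya = np.clip(np.rint(ys + v).astype(int), 0, h - 1)
    xb = np.clip(np.rint(xs - u).astype(int), 0, w - 1)
    yb = np.clip(np.rint(ys - v).astype(int), 0, h - 1)
    diff = g1[ya, xa] - g2[yb, xb]
    a = adaptive_alpha(u, v, alpha, d)     # vanishes where the constraints (2-13) break
    u_new = u + eta_d * (neighbour_mean(u) - u) - eta_x * a * diff * (g1x[ya, xa] + g2x[yb, xb])
    v_new = v + eta_d * (neighbour_mean(v) - v) - eta_x * a * diff * (g1y[ya, xa] + g2y[yb, xb])
    return u_new, v_new
```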
Figure 6 illustrates the change of parameter a within the permitted area of movement MA of the motion vector. Let the value of parameter a be c when the motion vector meets the image plane orthogonally, in which situation the motion vector is located in the middle of the area of movement. When the shift of the motion vector increases and it approaches the boundaries of the area of movement, the value of parameter a decreases, as will be seen from the figure. The decrease can be linear, as is shown in the figure, or non-linear, depending on the application.
Let us study the additional constraints by means of Figures 7-9. Figure 7 illustrates the permitted intersecting area of the motion vector fixed to point k with the image plane when the constraining factor is the motion vector fixed to a neighbouring fixing point A. Let us denote also this motion vector A, according to the fixing point. Let us assume A to be known, and hence by shifting the motion vector fixed to point A to point k (let us call the shifted motion vector A', denoted with a broken line in the figure), the constraining influence of A on the permitted shifting area of the motion vector fixed to point k is found. The permitted shifting area S (denoted with a hatched line in the figure) is determined in such a way that the distance of the intersection of the image and the motion vector fixed to point k from the intersection of the image and motion vector A' shall be less than the distance between points A and k. When studying the figure, it is to be borne in mind that the image plane is different from the plane of the fixing point. In other words, if the plane of the fixing point is shifted along the motion vectors onto the image plane and if all motion vectors have the same direction (in this case, the direction of A), a permitted shifting area S is obtained for the motion vector being studied that passes through point k when the constraining factor is one known motion vector A. Since four surrounding fixing points have been used in the calculation, the permitted area has the shape of a square, at the angles of which the motion vectors passing through the neighbouring fixing points are located. The square denoted with a broken line illustrates a situation where motion vector A has no shift but the motion vector meets the image plane orthogonally. The shape of the area of movement need not necessarily be a square, but the edges of the square may be curved, or the area may even possibly be a circle. At their maximum, however, the boundaries of the area of movement must pass through the neighbouring points that are used in the calculation.
Figure 8 illustrates the influence of the motion vectors of the four closest neighbouring fixing points (A, B, C and D) on the motion vector passing through point k. The permitted area S on the image plane is denoted with a hatched line. Figure 9 illustrates the permitted areas of intersection of the motion vector on the image planes of the original images.
First Embodiment of the Invention
The following presents, by way of example, the implementation of the invention in a situation where the optical flow between two known images is sought (Figure 10) and new images between said images are formed by means of the flow and the original images (Figures 4 and 11).
First, the images to be processed are introduced into the flow determining system. At this stage, the size of the sampling matrix, i.e. the pixel number into which the images are digitalized, is selected. As a next step, the value of each pixel defined by the sampling matrix is measured from the images (Figure 10, step 91 ). The value can be for example a colour or a grayish hue. What variable is used can be selected in accordance with the situation.
Since it is preferred to select the number of points on the fixing plane (i.e. the number of motion vectors) to be smaller than the pixel number (step 92), the digitalized images are low-pass filtered for example by convolution with the Gaussian function (the filtering can be implemented in some other manner as well, or omitted completely, even though it is preferable). The Gaussian function is separable, wherefore the convolution can first be calculated in direction x and thereafter in direction y, which is more rapid than calculating a bidimensional convolution. Hence, the size of the filtering function employed is dependent on the number of motion vectors (i.e. on the distance between the motion vectors). In connection with the filtering, the derivatives and the desired pixel values are stored in memory from the surface of both images, for instance the greatest derivatives in the direction of each surface coordinate (g′_max = max{ g′_{1,1}, g′_{2,1}, g′_{1,2}, g′_{2,2} }) and the greatest and smallest pixel values between the images (g_min = min{g₁, g₂}, g_max = max{g₁, g₂}). Using these values, the greatest permitted value for the unit parameter η_x - by means of which the speed of convergence is influenced when the desired optical flow is sought - is determined (step 93).
After the image processing described above, the fixing plane and the fixing points therein up to the midpoint of the images are determined (step 94). The fixing plane may also be at another point than midway, but it is most advantageous to use a plane formed in the middle. It is to be noted that if the first original image serves as the fixing plane, it suffices to calculate the derivatives from the second image only. The number of fixing points is the same as the number of motion vectors determined in connection with the filtering. The fixing of the motion vectors can also be performed at an earlier step.
Thereafter, for each motion vector located at each fixing point (k = (x, y)):
a) The pixel values, the derivatives and the motion vector shift for the images at the intersections of the motion vector and the images are sought, k+t(x) for the first image and k-t(x) for the second image (step 95). The derivatives are obtained by calculating the difference between the values of two neighbouring image points. The initial direction of the motion vector may be directly against the image planes, but if earlier information on the direction of the motion vectors is available (for example a motion vector field calculated chronologically by means of the previous image pair), this can be made use of.
b) The Laplace function of the representation is calculated (step 96) by calculating the difference between the motion vector shift t(x) and the mean of the shifts of the surrounding motion vectors. The value of the Laplace function indicates the non-smoothness of the motion vector field, i.e., inversely it indicates the smoothness.
c) The additional constraint (equation 3-1) - i.e. the differences between the motion vector shift t(x) and the shifts of the surrounding motion vectors - is calculated, and the greatest of these is selected (step 97).
d) Thereafter, a new shift t(x) is upgraded for the motion vector located at point k = (x, y) (step 98) by substituting the quantities calculated above in equations (3-3), in accordance with which the direction of an individual motion vector is upgraded as the additional constraint calculated at step c) changes parameter a. The result is stored in memory if the positions of all lines are upgraded simultaneously (Jacobi iteration), or alternatively the position of the motion vectors is immediately changed (Gauss-Seidel iteration).
Thereafter, steps a)-d) are repeated 10-15 times, or until the change in the representation is considered to be sufficiently small (step 99). Once the optical flow containing the desired number of motion vectors has thus been achieved, the number of motion vectors can be increased, i.e. the resolution of the optical flow can be enhanced (steps 910 and 911). The initial shifts of the new motion vectors to be added are obtained by interpolating between the shifts of the previous motion vectors. The process is restarted by selecting the number of motion vectors (their distance from one another) and low-pass filtering. Such processing is continued until the desired vector density has been achieved, that is, until an optical flow has been formed between the images. A new image (or new images) is then calculated by means of the flow and the original images using the method represented by equation (2-1) (Figures 4 and 5). Another alternative for forming a new image or images is to interpolate the value of an image point for each discrete motion vector of the optical flow onto a desired plane z: in other words, the value of a motion vector point on plane z is obtained by interpolating from the values of the intersections of the first image and the motion vector and of the second image and the motion vector. Thus, in this method a motion vector point value is calculated for the desired value of z (Figure 11), as sketched below.
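The second alternative just described (interpolating a value for each motion vector onto the desired plane z, the Figure 11 method) can be sketched roughly as follows. Equation (2-1) is not reproduced in this excerpt, so the z-convention (z = 0 at the first image, z = 1 at the second), the bilinear sampling and the function names are assumptions made for illustration.

    import numpy as np

    def bilinear(img, p):
        # Bilinear sampling of img at the real-valued point p = (row, col),
        # clamped to the image borders (an illustrative choice).
        h, w = img.shape
        y = float(np.clip(p[0], 0, h - 1.001))
        x = float(np.clip(p[1], 0, w - 1.001))
        y0, x0 = int(y), int(x)
        dy, dx = y - y0, x - x0
        return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
                + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

    def points_on_plane(g1, g2, fixing_points, shifts, z):
        # For every motion vector fixed at k with shift t, return the point where
        # the vector crosses plane z (z = 0: first image, z = 1: second image)
        # together with the value interpolated between its two intersections.
        points, values = [], []
        for k, t in zip(fixing_points, shifts):
            k = np.asarray(k, dtype=float)
            t = np.asarray(t, dtype=float)
            p1 = k + t                     # intersection with the first image
            p2 = k - t                     # intersection with the second image
            points.append((1 - z) * p1 + z * p2)
            values.append((1 - z) * bilinear(g1, p1) + z * bilinear(g2, p2))
        return np.array(points), np.array(values)

The resulting points are scattered rather than on a regular pixel grid; resampling them onto a grid is a further step not covered in this excerpt.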
Even though the determining of the flow and the new image has been studied in this example utilizing two original images, it is possible to utilize more than two images for determining the optical flow and the new image. In such a situation, the calculation and the method must naturally be modified from those presented above.
Second Embodiment of the Invention
The invention can also be made use of in enlarging aliased images. Aliased images in this context refer to images in which the sampling frequency is small compared to the frequencies contained in the images. When aliased images are enlarged, the advantage in accordance with Figure 12 is achieved compared to conventional interpolation; in other words, fewer shadow images are formed.
In Figure 12, object A has been enlarged in the orthogonal direction by a factor of two. Object B represents the result obtained by conventional interpolation methods, and object C represents the interpolation result obtained by means of the optical flow. When aliased images are enlarged, the new image is sought similarly as in the interpolation between two images: here, however, the new image is sought between two adjacent lines/columns of image points - not between two images but, so to speak, within the image. The equations set forth previously are herein rendered in the following form:

u_{n+1} = u_n + η · [ (ū_n - u_n) - α(u_n) · (g_1(x+u_n) - g_2(x-u_n)) · (dg_1/dx(x+u_n) + dg_2/dx(x-u_n)) ],   (4-1)

and

α(u_x) = α_0 · max{0, 1 - |u_x|},   (4-2)

wherein

|u_x| = |u_x(x)| = max{ |u(x+1) - u(x)|, |u(x-1) - u(x)| } / d,   (4-3)

where d is the distance between the fixing points of the motion vectors on the fixing plane, x is the fixing point of the motion vector, u is the motion vector shift from the orthogonal position, and ū_n denotes the mean of the shifts of the motion vectors flanking the one being examined.
An equation other than (4-3), approximating the derivative at the point x being examined, can also be used, an example being
|u_x| = |u_x(x)| = |u(x+1) - u(x-1)| / d,   (4-4)
where, therefore, the approximation of the derivative is sought from the difference between the shifts of the neighbouring motion vectors, and not from the difference between the shift of the motion vector at the point being examined and the shifts of the neighbouring motion vectors. Figure 13 shows an example of motion vectors formed between two image point lines/columns g_1 and g_2.
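The per-vector update of equations (4-1)-(4-3), as reconstructed above, can be sketched as follows for one pair of pixel lines g_1 and g_2 with fixing points assumed to lie at pixel coordinates x = i·d. The boundary handling at the ends of the line, the linear sampling helper and the names are illustrative assumptions.

    import numpy as np

    def sample(line, x):
        # Linear interpolation of a 1-D pixel line at a real-valued coordinate.
        x = float(np.clip(x, 0, len(line) - 1.001))
        j = int(x)
        return (1 - (x - j)) * line[j] + (x - j) * line[j + 1]

    def limit_term(u, i, d):
        # |u_x(x)| of equation (4-3): largest shift difference to a
        # neighbouring motion vector, scaled by the fixing-point spacing d.
        diffs = []
        if i > 0:
            diffs.append(abs(u[i - 1] - u[i]))
        if i < len(u) - 1:
            diffs.append(abs(u[i + 1] - u[i]))
        return max(diffs) / d

    def alpha(u, i, d, alpha0):
        # Equation (4-2): the data-term weight decays to zero as the
        # limit |u_x| = 1 is approached.
        return alpha0 * max(0.0, 1.0 - limit_term(u, i, d))

    def update_shift(g1, g2, dg1, dg2, u, i, d, alpha0, eta):
        # One update of equation (4-1) for the motion vector whose fixing
        # point is the i-th one, at pixel coordinate x = i * d (assumed layout).
        x = i * d
        x1 = x + u[i]          # intersection with the first pixel line
        x2 = x - u[i]          # intersection with the second pixel line
        # Neighbour mean (indices clamped at the ends of the line).
        u_bar = 0.5 * (u[max(i - 1, 0)] + u[min(i + 1, len(u) - 1)])
        data = (sample(g1, x1) - sample(g2, x2)) * (sample(dg1, x1) + sample(dg2, x2))
        return u[i] + eta * ((u_bar - u[i]) - alpha(u, i, d, alpha0) * data)

Writing the returned value straight back into u[i] corresponds to the Gauss-Seidel variant mentioned earlier; collecting all new values and applying them together would correspond to the Jacobi variant.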
Furthermore, the following algorithm can now be used for calculating new pixels:
y_n = x - u(x) · z,   (4-5)
where z is the distance of the image plane to be calculated from the fixing-point plane of the motion vectors. The following is an example, with reference to Figure 14a, of applying the method in a situation where a single image is known: the optical flow between each pair of adjacent pixel lines and columns in the image is sought, and new pixel lines and columns are calculated by means of the flow and the original pixel lines and columns.
First, the images to be processed are introduced into the flow determining system. At this stage, the size of the sampling matrix, i.e. the pixel number into which the images are digitalized, is selected, unless the images are already in digital form. As a next step, the value of each pixel is measured from the images (step 131). The value can be, for example, a colour or a gray-scale value. Which value is used can be selected in accordance with the situation.
The number of motion vectors is selected, and the two first pixel lines are low-pass filtered, for example by convolution with a Gaussian function (step 132). In connection with the filtering, the greatest derivatives in both directions of the image plane (g'_max = max{g'_1,1, g'_2,1, g'_1,2, g'_2,2}) and the greatest and smallest pixel values between the images (for example gray-scale values g_min = min{g_1, g_2}, g_max = max{g_1, g_2}) are stored in memory. The derivatives are obtained by calculating the difference between the gray-scale values of two adjacent image points. Parameter α_0 is set in such a way that the greatest possible updating shift of the motion vectors will be as great as possible but will not violate the constraint, e.g. α_0 = d / (g'_max · (g_max - g_min)) (step 133).
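A short sketch of the parameter setting of step 133, using the expression as reconstructed above; the derivative arrays dg1 and dg2 are assumed to have been formed during the filtering, and no guard is included for a flat image (g_max = g_min).

    import numpy as np

    def alpha0_bound(g1, g2, dg1, dg2, d):
        # alpha_0 = d / (g'_max * (g_max - g_min)): chosen so that a single
        # update cannot produce a shift large enough to violate the constraint.
        g_deriv_max = max(np.abs(dg1).max(), np.abs(dg2).max())
        g_min = min(g1.min(), g2.min())
        g_max = max(g1.max(), g2.max())
        return d / (g_deriv_max * (g_max - g_min))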
After the image processing described above, the fixing plane and the fixing points therein are determined midway between the pixel lines (step 134). The number of fixing points is the same as the number of motion vectors determined in connection with the filtering. The fixing of the motion vectors can also be performed at an earlier step.
Each motion vector (fixed to each fixing point) is submitted to the following procedure:

a) The values of the image points, the motion vector shift and the derivatives at the intersections of the motion vector and the pixel lines are sought, x+u(x) in the first line and x-u(x) in the second line (step 135).

b) The Laplace function, i.e. the smoothness of the representation, is calculated, for example by calculating the difference between the shift of the motion vector u(x) and the mean of the shifts of the motion vectors flanking it (step 136).

c) The additional constraint is calculated (step 137): e.g. the differences between the shift of the motion vector u(x) and the shifts of the adjacent motion vectors are determined, and the greatest of these is selected.

d) The quantities are substituted into equation (4-1), whereby the additional constraint changes parameter α and the motion vector shift is updated (step 138).

Steps a)-d) are performed for each motion vector, and they are repeated 10-15 times, or until the change in the representation is considered to be sufficiently small (step 139), as sketched below.
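The loop over steps a)-d) might be written as follows; it reuses the update_shift helper from the earlier sketch and updates the shifts in place in Gauss-Seidel fashion. The tolerance value is an illustrative assumption.

    import numpy as np

    def solve_flow_1d(g1, g2, dg1, dg2, u, d, alpha0, eta, max_iters=15, tol=1e-3):
        # Steps a)-d) repeated for every motion vector until the change in
        # the representation is sufficiently small (or at most 10-15 passes).
        u = np.asarray(u, dtype=float).copy()
        for _ in range(max_iters):
            max_change = 0.0
            for i in range(len(u)):
                new_u = update_shift(g1, g2, dg1, dg2, u, i, d, alpha0, eta)
                max_change = max(max_change, abs(new_u - u[i]))
                u[i] = new_u          # in-place update: Gauss-Seidel style
            if max_change < tol:
                break
        return u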
Thereafter, the number of motion vectors is increased if desired (steps 1310 and 1311), and the process is restarted with a new low-pass filtering. The initial shifts of the new motion vectors to be added are obtained by interpolating between the shifts of the previous motion vectors, as sketched below. The process described above is repeated until the desired vector density has been achieved.
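A small sketch of this densification step: the shifts of the motion vectors to be added are initialised by interpolating between the shifts of the existing ones before the next filtering and iteration round. The factor-of-two densification and the use of linear interpolation are assumptions made for illustration.

    import numpy as np

    def densify(u, factor=2):
        # Initialise a denser set of motion-vector shifts by linear
        # interpolation between the shifts of the previous, coarser set.
        old_pos = np.arange(len(u))
        new_pos = np.linspace(0, len(u) - 1, factor * (len(u) - 1) + 1)
        return np.interp(new_pos, old_pos, np.asarray(u, dtype=float))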
Once the flow between the two first pixel lines has been formed and stored, the method proceeds to the next pixel line pair (Figure 14b, steps 1312 and 1313). The same operations are performed on the new pixel line pair as on the first line pair, except that the previous second line now becomes the first line, which is why the filtering needs to be performed only for the new second line.
Each pixel line pair in the image is processed in the manner described above, whereafter the method proceeds to processing the pixel columns (steps 1314 and 1315). The pixel column pairs in the image are processed similarly.
The desired new pixel lines and columns are calculated by means of the flow and the original pixels by either of the methods described above (equation 4-5, Figure 4 or Figure 11 ). To save memory space, the new pixel lines and columns can also be calculated immediately subsequent to the determination of the flow, and thus the storing of the flow is replaced by forming new pixel lines/columns.
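A minimal sketch of forming a new pixel line at a signed distance z from the fixing-point line by means of equation (4-5): each motion vector contributes one point y_n = x - u(x)·z, whose value is interpolated between the vector's intersections with the two original lines. The convention z = -1 for the first line and z = +1 for the second line, the interpolation weight, and the reuse of the sample helper from the earlier sketch are assumptions made for illustration.

    import numpy as np

    def new_pixel_line(g1, g2, u, d, z):
        # Points and values of a new pixel line at signed distance z from the
        # fixing-point line (z = -1: first original line, z = +1: second line).
        positions, values = [], []
        for i, ui in enumerate(u):
            x = i * d                     # fixing point of the i-th motion vector
            y = x - ui * z                # equation (4-5)
            v1 = sample(g1, x + ui)       # intersection with the first line
            v2 = sample(g2, x - ui)       # intersection with the second line
            w = (1.0 + z) / 2.0           # interpolation weight along the vector
            positions.append(y)
            values.append((1.0 - w) * v1 + w * v2)
        return np.array(positions), np.array(values)

As in the two-image case, the returned positions are not on a regular grid; resampling them onto the desired pixel grid is a further step not covered in this excerpt.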
The flow required to enhance the resolution can also be formed by calculating the pixel lines and columns simultaneously: a bidimensional flow matrix is formed from the motion vectors to be calculated, e.g. in accordance with the attached Figure 15. The fixing points of the vectors corresponding to the lines are denoted with R, and the fixing points of the vectors calculated on the basis of the columns are denoted with S. The fixing points are interconnected with elucidatory lines in the figure. The empty points are points in the original image. In this connection, a rectangular set of pixel co-ordinates has been examined, but the pixel rows could just as well be at an angle of 45°, between which rows the new pixels are formed. In this context, it is also possible to use more than two pixel rows/columns for determining the flow and the new image.

In comparison with the prior art, the invention affords a fairly short calculation time on account of the fact that there is no need to calculate each intermediate image separately. The invention also enables considerable changes between images on account of the suppression of the motion-minimizing term as the constraints are broken at the occlusion points. Moreover, the method is very robust on account of the automatic adjustment (change of the weighting factor).
The invention has very wide application. The motion information of the optical flow can be used for example for interpolating or extrapolating new images, improving the quality of an image sequence, and speed measurement by means of a camera. The determination of corresponding image points also has bearing upon forming three-dimensional images on the basis of stereo image pairs. Motion recognition based on images is used in several applications of computer vision. Motion information can also be used for altering images in an image set and for reducing, i.e. compressing, the information required for transmitting an image set. Since by means of this method an unlimited number of new images can be formed using one flow, image compression is more effective than in the prior art methods.
Because the realization of the constraints is attended to, a rapidly convergent and computationally light algorithm can be used for determining the flow, which makes it advantageous to implement the invention also in applications to which the production of new images does not directly relate.
The method can also be used for enhancing image resolution and producing high-quality still frames from video images. Even though the invention has been described in the above mainly by means of examples of an image to be formed between two original images and enlargement of an aliased image, the invention can be implemented in other embodiments within the scope of the inventive idea; for example more than two images can be used for determining the flow and forming a new image.

Claims

1. A method for forming a motion vector field composed of motion vectors between at least two images in connection with image processing, in which method the motion vector field is formed by minimizing it in relation to at least two criteria, the first criterion representing the motion information between the images and the second criterion the smoothness of the motion vector field, c h a r a c t e r i z e d in that additional constraints, determining motion vector-specifically the limit values of the directional change permitted to the motion vector, are used in forming the motion vector field, and that in forming the motion vector field the weighting of the first criterion is adjusted motion vector-specifically to be smaller when the limit values are approached, and thus on account of the adjustment the proportion of the first criterion is suppressed and the proportion of the second criterion increases when the motion vector field is formed by minimization.
2. A method as claimed in claim 1 , c h a r a c t e r i z e d in that when the motion vector reaches the limit values of the permitted directional change, the weighting of the first criterion is zeroed.
3. A method as claimed in claim 1 , c h a r a c t e r i z e d in that the weighting of the first criterion is proportional to the smallest distance of the motion vector from the limit values, according to which the absolute value of the difference between the directional change of two adjacent motion vectors, wherein the directional change corresponds to a deviation from the direction orthogonal to the image plane, said directional change being expressed as a shift vector having the direction of the image plane, shall be smaller than the distance between the fixing points of the same motion vectors, wherein the fixing points define the location points through which the individual motion vectors shall pass.
4. A method as claimed in claim 1, in which method:
- the number of motion vectors and their distance from one another are selected,
- the value and derivatives of each pixel are determined,
- a unit parameter is set for iterating the motion vector field,
- for each motion vector, the values and derivatives of the intersections thereof and the image planes are sought,
- the smoothness of the motion vector field is calculated,
- for each motion vector, the magnitude of the change of its shift vector having the direction of the image plane, the shift vector corresponding to the deviation of the motion vector from the direction orthogonal to the image plane, is determined,
- a motion vector field is iterated, so that the change of the shift vectors of the motion vectors is sufficiently small,
- the number of motion vectors is increased,
- the above steps are repeated until the number of motion vectors is sufficient,
characterized in that the motion vectors of the motion vector field are fixed to a plane in the middle of the images, said fixing defining for each motion vector a point through which that motion vector must pass, and limit values are sought for the shift vector of each motion vector, and said limit values are used for iterating the motion vector field.
5. A method as claimed in any one of claims 1 to 3, characterized in that the motion vectors of the motion vector field are fixed to fixing points that are located on a plane in the middle of the images.
6. A method as claimed in any one of claims 1 to 4, characterized in that the motion vectors of the motion vector field are fixed to fixing points that are located on the plane of one of the images.
7. A method as claimed in any one of claims 1 to 4, characterized in that the motion vectors of the motion vector field are fixed to fixing points that are located on a plane between the images.
8. A method as claimed in claim 3 or claim 4, characterized in that the limit value of the directional change of each motion vector, wherein the directional change is expressed as a shift vector having the direction of the image plane, said shift vector simultaneously defining the motion vector point on the image plane, is obtained by determining the differences between the shift vector of each motion vector and the shift vectors of the surrounding motion vectors, and selecting of these the one having the greatest absolute value.
9. A method as claimed in claim 3 or claim 4, characterized in that the limit value of the directional change of each motion vector, wherein the directional change is expressed as a shift vector having the direction of the image plane, said shift vector simultaneously defining the motion vector point on the image plane, is obtained by determining the differences between the shift vectors of the motion vectors surrounding each motion vector, and selecting of these the one having the greatest absolute value.
10. A method as claimed in any one of the preceding claims, wherein a new image is formed from original images by means of a motion vector field, characterized in that for each predetermined pixel of the new image a motion vector passing through said pixel is first sought, and a value is interpolated for each pixel of the new image from the values of the pixels of the original images, said values being obtained from the intersections of the selected motion vector and the original images.
11. A method as claimed in any one of claims 1 to 9, wherein a new image is formed from original images by means of a motion vector field, characterized in that a value is interpolated for each intersection of the motion vector and the desired new image plane, said intersection defining the location of the pixel in the new image, from the values of those pixels of the original images which are obtained from the intersections of each motion vector and the original images.
12. A method as claimed in claim 10, characterized in that the motion vector passing through a pixel of the new image to be determined is sought by initially selecting a motion vector, defining the distance vector between the motion vector being sought and the pixel to be determined on a plane having the direction of the images, selecting as the new motion vector a vector whose fixing point is at the distance determined by the distance vector defined above but in the reverse direction from the fixing point of the motion vector previously selected, and repeating the selection of a new motion vector until the value of the distance vector is considered to be sufficiently small, the motion vector last selected being the motion vector sought.
13. A method as claimed in claim 1 or claim 4, c h a r a c t e r i z e d in that when an odd number of original images is used, the motion vectors are fixed to the plane of the original image in the middle.
14. A method for forming a motion vector field between pixel lines/columns of an image in connection with image processing, characterized in that in the method, the motion vector field is solved by minimizing it in relation to at least two criteria, the first criterion representing the motion information between the pixel lines/columns and the second criterion the smoothness of the motion vector field, additional constraints determining motion vector-specifically the limit values of the directional change permitted to the motion vector being used in forming the motion vector field, and that when a solution to the motion vector field is sought the weighting of the first criterion is adjusted motion vector- specifically to be smaller when the limit values are approached, wherein on account of the adjustment the proportion of the first criterion is suppressed and the proportion of the second criterion increases when the motion vector field is formed by minimization.
15. A method as claimed in claim 14, c h a r a c t e r i z e d in that when the motion vector reaches the limit values of the permitted directional change, the weighting of the first criterion is zeroed.
16. A method as claimed in claim 14, c h a r a c t e r i z e d in that the weighting of the first criterion is proportional to the smallest distance of the motion vector from the limit values, according to which the absolute value of the difference between the directional change of two adjacent motion vectors, wherein the directional change corresponds to the deviation from the direction orthogonal to a pixel line/column, said directional change being expressed as a shift vector having the direction of a pixel line/column, shall be smaller than the distance between the fixing points of the same motion vectors, wherein the fixing points define the location points through which the individual motion vectors must pass.
17. A method as claimed in any one of claims 14 to 16, c h a r a c t e r i z e d in that in the method:
- the number of motion vectors and their distance from one another is selected,
- the value and derivatives of each pixel are determined,
- a unit parameter is set for iterating the motion vector field,
- for each motion vector, the values and derivatives of the intersections thereof and the pixel lines/columns are sought,
- the smoothness of the motion vector field is calculated,
- for each motion vector, the magnitude of the change of its shift vector having the direction of its pixel line/column, the shift vector corresponding to the deviation of the motion vector from the direction orthogonal to the pixel line/column, is determined,
- a motion vector field is iterated using the limit values of the shift vectors, so that the change of the shift vectors of the motion vectors is sufficiently small,
- the number of motion vectors is increased,
- the above steps are repeated until the number of motion vectors between pixel lines/columns is sufficient,
- the above steps are repeated for each pixel line/column pair.
18. A method as claimed in any one of claims 14 to 17, characterized in that the motion vectors of the motion vector field are fixed to a straight line in the middle of the pixel lines/columns.
19. A method as claimed in any one of claims 14 to 17, characterized in that the motion vectors of the motion vector field are fixed to a straight line between the pixel lines/columns.
20. A method as claimed in any one of claims 14 to 17, characterized in that the motion vectors of the motion vector field are fixed to a pixel line/column.
21. A method as claimed in claim 16 or claim 17, characterized in that the limit value of the directional change of each motion vector, wherein the directional change is expressed as a shift vector having the direction of the pixel line/column, said shift vector simultaneously defining the motion vector point in the new pixel line/column, is obtained by determining the differences between the shift vector of each motion vector and the shift vectors of the adjacent motion vectors, and by selecting from these the one having the greatest absolute value.
22. A method as claimed in claim 16 or claim 17, characterized in that the limit value of the directional change of each motion vector, wherein the directional change is expressed as a shift vector having the direction of the pixel line/column, said shift vector simultaneously defining the motion vector point in the new pixel line/column, is obtained by determining the difference between the shift vectors of the motion vectors flanking each motion vector.
23. A method as claimed in any one of claims 14 to 22, wherein the pixel number in an image is changed by means of a motion vector field, characterized in that for each predetermined pixel of a new pixel line/column in the image to be changed, a motion vector passing through said pixel is selected, and for each pixel of the image to be changed, a value is interpolated from the values of the pixels of the original images, said values being obtained from the intersection of the selected motion vector and the pixel lines/columns of the original image.
24. A method as claimed in any one of claims 14 to 22, wherein the pixel number in an image is changed by means of a motion vector field, characterized in that for each intersection of a motion vector and a straight line of the desired new pixel line/column, said intersection defining the location of a new pixel, a value is interpolated from the values of the pixels of the original images, said values being obtained from the intersection of each motion vector and the pixel lines/columns of the original image.
25. A method as claimed in claim 23, characterized in that the motion vector passing through a pixel being determined in the new pixel line/column in the changed image is sought by first selecting a motion vector, determining the distance vector between the motion vector being sought and the pixel to be determined on a straight line having the direction of the pixel line/column, selecting as the new motion vector a vector whose fixing point is at the distance determined by the distance vector defined above but having a reverse direction from the fixing point of the motion vector previously selected, and repeating the selection of a new motion vector until the value of the distance vector is considered to be sufficiently small, the motion vector selected last being the motion vector sought.
26. A method as claimed in any one of claims 14 to 25, characterized in that more than two original pixel lines/columns are used.
27. A method as claimed in claim 26, characterized in that when an odd number of original pixel lines/columns is used, the motion vectors are fixed to the straight line of the middlemost of the original pixel lines/columns.