CA2087523C - Method of processing an image - Google Patents
Method of processing an image
- Publication number
- CA2087523C
- Authority
- CA
- Canada
- Prior art keywords
- image
- feature
- data
- generating
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/754—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
Abstract
A method of processing an image comprising the steps of: locating within the image the position of at least one predetermined feature; extracting from said image data representing each said feature; and calculating for each feature a feature vector representing the position of the image data of the feature in an N-dimensional space, said space being defined by a plurality of reference vectors each of which is an eigenvector of a training set of like features, in which the image data of each feature is modified to normalise the shape of each feature thereby to reduce its deviation from a predetermined standard shape of said feature, which step is carried out before calculating the corresponding feature vector.
Description
A METHOD OF PROCESSING AN IMAGE
This invention relates to a method of processing an image and particularly, but not exclusively, to the use of such a method in the recognition and encoding of images of objects such as faces.
One field of object recognition which is potentially useful is in automatic identity verification techniques for restricted access buildings or fund transfer security, for example in the manner discussed in GB Pat. No. 2,229,305 published Sept. 19, 1990. In many such fund transfer transactions a user carries a card which includes machine-readable data stored magnetically, electrically or optically. One particular application of face recognition is to prevent the use of such cards by unauthorised personnel by storing face identifying data of the correct user on the card, reading the data out, obtaining a facial image of the person seeking to use the card by means of a camera, analyzing the image, and comparing the results of the analysis with the data stored on the card for the correct user.
The storage capacity of such cards is typically only a few hundred bytes, which is very much smaller than the memory space needed to store a recognisable image as a frame of pixels. It is therefore necessary to use an image processing technique which allows the image to be characterised using the smaller number of memory bytes.
Another application of image processing which reduces the number of bytes needed to characterise an image is in hybrid video coding techniques for video telephones, as disclosed in our earlier filed application published as US patent 4,841,575. In this and similar applications the perceptually important parts of the image are located and the available coding data is preferentially allocated to those parts.
A known method of such processing of an image comprises the steps of: locating within the image the position of at least one predetermined feature; extracting image data from
WO 92/02000 PCT/GB91/01183
said image representing each said feature; and calculating for each feature a feature vector representing the position of the image data of the feature in an N-dimensional space, said space being defined by a plurality of reference vectors each of which is an eigenvector of a training set of images of like features.
The Karhunen-Loève transform (KLT) is well known in the signal processing art for various applications. It has been proposed to apply this transform to identification of human faces (Sirovitch and Kirby, J. Opt. Soc. Am. A, Vol 4, No 3, pp 519-524, "Low Dimensional Procedure for the Characterisation of Human Faces", and IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol 12, No 1, pp 103-108, "Application of the Karhunen-Loeve Procedure for the Characterisation of Human Faces"). In these techniques, images of substantially the whole face of members of a reference population were processed to derive a set of N eigenvectors each having a picture-like appearance (eigen-pictures, or caricatures). These were stored. In a subsequent recognition phase, a given image of a test face (which need not belong to the reference population) was characterised by its image vector in the N-dimensional space defined by the eigenvectors. By comparing the image vector, or data derived therefrom, with the identification data one can generate a verification signal in dependence upon the result of the comparison.
However, despite the ease with which humans "never forget a face", the task for a machine is a formidable one, because a person's facial appearance can vary widely in many respects over time: the eyes and mouth, in the case of facial image processing, for example, are mobile.
According to a first aspect of the present invention a method of processing an image according to the preamble of claim 1 is characterised in that the method further comprises the step of modifying the image data of each feature to normalise the shape of each feature thereby to reduce its deviation from a predetermined standard shape of said feature, which step is carried out before calculating the corresponding feature vector.
We have found that recognition accuracy of images of faces, for example, can be improved greatly by such a modifying step, which reduces the effects of a person's changing facial expression.
In the case of an image of a face, for example, the predetermined feature could be the entire face or a part of it such as the nose or mouth. Several predetermined features may be located and characterised as vectors in the corresponding space of eigenvectors if desired.
It will be clear to those skilled in this field that the present invention is applicable to processing images of objects other than the faces of humans, notwithstanding that the primary application envisaged by the applicant is in the field of human face images and that the discussion and specific examples of embodiments of the invention are directed to such images.
The invention also enables the use of fewer eigen-pictures, and hence results in a saving of storage or of transmission capacity.
Further, by modifying the shape of the feature towards a standard (topologically equivalent) feature shape, the accuracy with which the feature can be located is improved.
Preferably the training set of images of like features is modified to normalise the shape of each of the training set of images thereby to reduce their deviation from a predetermined standard shape of said feature, which step is carried out before calculating the eigenvectors of the training set of images.
The method is useful not only for object recognition, but also as a hybrid coding technique in which feature position data and feature representative data (the N-dimensional vector) are transmitted to a receiver where an image is assembled by combining the eigen-pictures corresponding to the image vector.
Eigen-pictures provide a means by which the variation in a set of related images can be extracted and used to represent those images and others like them. For instance, an
eye image could be economically represented in terms of a 'best' coordinate system of 'eigen-eyes'. The eigen-pictures themselves are determined from a training set of representative images, and are formed such that the first eigen-picture embodies the maximum variance between the images, and successive eigen-pictures have monotonically decreasing variance. An image in the set can then be expressed as a series, with the eigen-pictures effectively forming basis functions:
I = M + w_1 P_1 + w_2 P_2 + … + w_m P_m

where

M = mean over the entire training set of images
w_i = component (weight) of the i'th eigen-picture
P_i = i'th eigen-picture, 1 <= i <= m
I = original image

If we truncate the above series we still have the best representation we could for the given number of eigen-pictures, in a mean-square-error sense.
The basis of eigen-pictures is chosen such that they point in the directions of maximum variance, subject to being orthogonal. In other words, each training image is considered as a point in n-dimensional space, where 'n' is the size of the training images in pels; eigen-picture vectors are then chosen to lie on lines of maximum variance through the cluster(s) produced.
Given training images I_1, …, I_m, we first form the mean image M, and then the difference images (a.k.a. 'caricatures') D_i = I_i - M.
The first paragraph (above) is equivalent to choosing our eigen-picture vectors P_k such that

lambda_k = (1/m) Sum_i (P_k . D_i)²

is maximised, with P_i . P_k = 0 for i != k. The eigen-pictures P_k above are in fact the eigenvectors of a very large covariance matrix, the solution of which would be intractable. However, the problem can be reduced to more manageable proportions by forming the matrix L, where L_ij = D_i^T D_j, and solving for the eigenvectors v_k of L.
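The reduction from the intractable n x n covariance matrix to the small m x m matrix L can be sketched as follows. This is a minimal NumPy illustration; the function name and the choice of how many eigen-pictures to keep are our own, not taken from the patent:

```python
import numpy as np

def eigen_pictures(training, keep):
    """Sketch of eigen-picture computation via the small m x m matrix L.

    training: (m, n) array of m training images, each flattened to n pels.
    Returns the mean image M and `keep` orthonormal eigen-pictures (one
    per row), found from the eigenvectors v_k of L_ij = D_i . D_j rather
    than from the huge n x n covariance matrix.
    """
    M = training.mean(axis=0)                 # mean image over the set
    D = training - M                          # difference images ('caricatures')
    L = D @ D.T                               # small m x m matrix, L_ij = D_i . D_j
    evals, V = np.linalg.eigh(L)              # eigenvectors v_k of L
    V = V[:, np.argsort(evals)[::-1][:keep]]  # keep directions of maximum variance
    P = V.T @ D                               # P_k = sum_j (v_k)_j D_j
    return M, P / np.linalg.norm(P, axis=1, keepdims=True)
```

Each row of P plays the role of an eigen-picture P_k; truncating `keep` gives the best representation with that many pictures in the mean-square-error sense noted above.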
The eigen-pictures can then be found from

P_k = Sum_j (v_k)_j D_j

The term 'representation vector' has been used to refer to the vector whose components (w_i) are the factors applied to each eigen-picture (P_i) in the series. That is

W = (w_1, w_2, …, w_m)

The representation vector equivalent to an image I is formed by taking the inner product of I's caricature with each eigen-picture:
w_i = (I - M) . P_i, for 1 <= i <= m.
Note that a certain assumption is made when it comes to representing an image taken from outside the training set used to create the eigen-pictures: the image is assumed to be sufficiently 'similar' to those in the training set to enable it to be well represented by the same eigen-pictures.
The representation of two images can be compared by calculating the Euclidean distance between them:
d_ij = |W_i - W_j|.
Thus, recognition can be achieved via a simple threshold, d_ij < T means recognised, or d_ij can be used as a sliding confidence scale.
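Assuming a mean image M and a matrix P of eigen-pictures (one per row) are available, the projection and threshold test just described can be sketched as follows; the function names are illustrative, not the patent's:

```python
import numpy as np

def representation(image, M, P):
    """w_i = (I - M) . P_i: inner product of the caricature with each
    eigen-picture, giving the representation vector W."""
    return P @ (image - M)

def matches(image_a, image_b, M, P, T):
    """Compare two images by the Euclidean distance d between their
    representation vectors; d < T means 'recognised'."""
    d = np.linalg.norm(representation(image_a, M, P)
                       - representation(image_b, M, P))
    return d < T, d
```

The distance d can equally be reported on its own as the sliding confidence scale mentioned above.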
Deformable templates consist of parametrically defined geometric templates which interact with images to provide a best fit of the template to a corresponding image feature. For example, a template for an eye might consist of a circle for the iris and two parabolas for the eye/eyelid boundaries, where size, shape and location parameters are variable.
An energy function is formed by integrating certain image attributes over template boundaries, and parameters are iteratively updated in an attempt to minimise this function.
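As an illustration of this iterative update, a much-simplified template (a lone circle for the iris, ignoring the eyelid parabolas) can be fitted by numerical gradient descent on such an energy function. The sampling density, step sizes and the `valley_fn` interface are all our own assumptions, not taken from the patent:

```python
import numpy as np

def fit_iris(valley_fn, x0, y0, r0, steps=400, lr=0.5, eps=0.5, n_pts=64):
    """Fit a circular iris template (centre cx, cy and radius r) to an image.

    valley_fn(xs, ys) returns the valley (darkness) strength at continuous
    image coordinates.  The energy integrates minus that strength around
    the circle boundary, so minimising it pulls the circle onto the iris.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)

    def energy(cx, cy, r):
        return -np.mean(valley_fn(cx + r * np.cos(t), cy + r * np.sin(t)))

    p = np.array([x0, y0, r0], dtype=float)
    for _ in range(steps):
        # central-difference gradient of the energy w.r.t. each parameter
        g = np.array([(energy(*(p + eps * e)) - energy(*(p - eps * e))) / (2 * eps)
                      for e in np.eye(3)])
        p -= lr * g                      # iterative parameter update
    return p                             # fitted (centre_x, centre_y, radius)
```

A full implementation would use several templates and image-derived valley, peak and edge maps; this sketch only shows the energy-minimisation loop.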
This has the effect of moving the template towards the best available fit in the given image.
The location within the image of the position of at least one predetermined feature may employ a first technique to provide a coarse estimation of position and a second, different, technique to improve upon the coarse estimation. The second technique preferably involves the use of such a deformable template technique.
The deformable template technique requires certain filtered images in addition to the raw image itself, notably peak, valley and edge images. Suitable processed images can be obtained using morphological filters, and it is this stage which is detailed below.
Morphological filters are able to provide a wide range of filtering functions including nonlinear image filtering, noise suppression, edge detection, skeletonization, shape recognition etc. All of these functions are provided via simple combinations of two basic operations termed erosion and dilation. In our case we are only interested in valley, peak and edge detection.
Erosion of greyscale images effectively causes bright areas to shrink in upon themselves, whereas dilation causes bright areas to expand. An erosion followed by a dilation causes bright peaks to be lost (an operator called 'open'). Conversely, a dilation followed by an erosion causes dark valleys to be filled (an operator called 'close'). For specific details see Maragos P, (1987), "Tutorial on Advances in Morphological Image Processing and Analysis", Optical Engineering, Vol 26, No 7.
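A minimal NumPy sketch of the two basic operations and the filtered images built from them follows. The peak image is the residue of 'open', the valley image the residue of 'close', and an edge image is obtained here as the morphological gradient (dilation minus erosion); the 3x3 neighbourhood is our choice of structuring element:

```python
import numpy as np

def _shifted_stack(img, size):
    """All size x size shifted views of img, edge-padded."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    return [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            for dy in range(size) for dx in range(size)]

def erode(img, size=3):
    """Greyscale erosion: neighbourhood minimum (bright areas shrink)."""
    return np.min(_shifted_stack(img, size), axis=0)

def dilate(img, size=3):
    """Greyscale dilation: neighbourhood maximum (bright areas expand)."""
    return np.max(_shifted_stack(img, size), axis=0)

def peaks(img):
    """Bright peaks lost by 'open' (erosion then dilation)."""
    return img - dilate(erode(img))

def valleys(img):
    """Dark valleys filled by 'close' (dilation then erosion)."""
    return erode(dilate(img)) - img

def edges(img):
    """Morphological gradient as a simple edge-strength image."""
    return dilate(img) - erode(img)
```

The valley image is the one used later as the main image force for facial features.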
In image processing systems of the kind to which the present invention relates, it is often necessary to locate the object, e.g. head or face, within the image prior to processing.
Usually this is achieved by edge detection, but traditional edge detection techniques are purely local - an edge is indicated whenever a gradient of image intensity occurs - and hence will not in general form an edge that is completely closed (i.e. forms a loop around the head) but will instead create a number of edge segments which together outline or partly outline the head. Post-processing of some kind is thus usually necessary.
We have found that the adaptive contour model, or "snake", technique is particularly effective for this purpose.
Preferably, the predetermined feature of the image is located by determining parameters of a closed curve arranged to lie adjacent a plurality of edge features of the image, said curve being constrained to exceed a minimum curvature and to have a minimum length compatible therewith. The boundary of the curve may be initially calculated proximate the edges of the image, and subsequently iteratively reduced.
Prior to a detailed description of the physical embodiment of the invention, the 'snake' signal processing techniques mentioned above will now be described in greater detail.
Introduced by Kass et al [Kass M, Witkin A, Terzopoulos D, "Snakes: Active Contour Models", International Journal of Computer Vision, 321-331, 1988], snakes are a method of attempting to provide some of the post-processing that our own visual system performs. A snake has built into it various properties that are associated with both edges and the human visual system (e.g. continuity, smoothness and to some extent the capability to fill in sections of an edge that have been occluded).
A snake is a continuous curve (possibly closed) that attempts to dynamically position itself from a given starting position in such a way that it 'clings' to edges in the image.
The form of snake that will be considered here consists of curves that are piecewise polynomial. That is, the curve is in general constructed from N segments

{x_i(s), y_i(s)}, i = 1, …, N

where each of the x_i(s) and y_i(s) are polynomials in the parameter s. As the parameter s is varied a curve is traced out.
From now on snakes will be referred to as the parametric curve u(s)=(x(s),y(s)), where s is assumed to vary between 0 and 1. What properties should an 'edge hugging' snake have?
The snake must be 'driven' by the image. That is, it must be able to detect an edge in the image and align itself with the edge. One way of achieving this is to try to position the snake such that the average 'edge strength' (however that may be measured) along the length of the snake is maximised. If the measure of edge strength is F(x, y) >= 0 at the image point (x, y) then this amounts to saying that the snake u(s) is to be chosen in such a way that the functional

∫₀¹ F(x(s), y(s)) ds … (1)

is maximised. This will ensure that the snake will tend to mould itself to edges in the image if it finds them, but does not guarantee that it will find them in the first place.
Given an image, the functional may have many local minima - finding them is where the 'dynamics' arise.
An edge detector applied to an image will tend to produce an edge map consisting of mainly thin edges. This means that the edge strength function tends to be zero at most places in the image, apart from on a few lines. As a consequence a snake placed some distance from an edge may not be attracted towards the edge because the edge strength is effectively zero at the snake's initial position. To help the snake come under the influence of an edge, the edge image is blurred to broaden the width of the edges.
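The broadening step can be sketched as a repeated box blur of the edge map; this is a stand-in for a Gaussian blur, and the 3x3 kernel and pass count are our own choices:

```python
import numpy as np

def broaden_edges(edge_map, passes=4):
    """Broaden thin edges by repeated 3x3 box blurring, so that a snake
    some distance from an edge still feels a non-zero edge strength."""
    img = edge_map.astype(float)
    for _ in range(passes):
        p = np.pad(img, 1, mode='edge')
        img = sum(p[dy:dy + edge_map.shape[0], dx:dx + edge_map.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return img
```

Each pass spreads an edge's influence by one pel in every direction, so a few passes give the snake a smooth gradient to slide down toward the edge.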
If an elastic band were held around a convex object and 25 then let go, the band would contract until the object prevented it from doing so further. At this point the band would be moulded to the object, thus describing the boundary.
Two forces are at work here: firstly that providing the natural tendency of the band to contract; secondly the opposing force provided by the object. The band contracts because it tries to minimise its elastic energy due to stretching. If the band were described by the parametric curve u(s)=(x(s),y(s)) then the elastic energy at any point s is proportional to

(dx/ds)² + (dy/ds)²

That is, the energy is proportional to the square of how much the curve is being stretched at that point. The elastic energy along its entire length, given the constraint of the object, is minimised. Hence the elastic band assumes the shape of the curve u(s)=(x(s),y(s)) where the u(s) minimises the functional

∫₀¹ {(dx/ds)² + (dy/ds)²} ds … (2)

subject to the constraints of the object. We would like closed snakes to have analogous behaviour. That is, to have a tendency to contract, but to be prevented from doing so by the objects in an image. To model this behaviour the parametric curve for the snake is chosen so that the functional (2) tends to be minimised. If in addition the forcing term (1) were included then the snake would be prevented from contracting 'through objects' as it would be attracted toward their edges. The attractive force would also tend to pull the snake into the hollows of a concave boundary, provided that the restoring 'elastic force' was not too great.
One of the properties of the edges that is difficult to model is their behaviour when they can no longer be seen. If one were looking at a car and a person stood in front of it, few would have any difficulty imagining the contours of the edge of the car that were occluded. They would be 'smooth' extensions of the contours either side of the person. If the above elastic band approach were adopted it would be found that the band formed a straight line where the car was occluded (because it tries to minimise energy, and thus length, in this situation). If however the band had some stiffness (that is, a resistance to bending, as for example displayed by a flexible bar) then it would tend to form a smooth curve in the occluded region of the image and be tangential to the boundaries on either side.
Again a flexible bar tends to form a shape so that its elastic energy is minimised. The elastic energy in bending is dependent on the curvature of the bar, that is the second derivatives. To help force the snake to emulate this type of behaviour the parametric curve u(s)=(x(s),y(s)) is chosen so it tends to minimise the functional

∫₀¹ {(d²x/ds²)² + (d²y/ds²)²} ds … (3)

which represents a pseudo-bending energy term. Of course, if a snake were made too stiff it would be difficult to force it to conform to highly curved boundaries under the action of the forcing term (1).
Three desirable properties of snakes have now been identified. To incorporate all three into the snake at once, the parametric curve u(s)=(x(s),y(s)) representing the snake is chosen so that it minimises the functional

I(x(s),y(s)) = ∫₀¹ { α(s)[(d²x/ds²)² + (d²y/ds²)²] + β(s)[(dx/ds)² + (dy/ds)²] - F(x(s),y(s)) } ds … (4)

Here the terms α(s) > 0 and β(s) > 0 represent respectively the amount of stiffness and elasticity that the snake is to have. It is clear that if the snake approach is to be successful then the correct balance of these parameters is crucial. Too much stiffness and the snake will not correctly hug the boundaries; too much elasticity and closed snakes will be pulled across boundaries and contract to a point, or may even break away from boundaries at concave regions. The negative sign in front of the forcing term is because minimising -∫F(x, y)ds is equivalent to maximising ∫F(x, y)ds.
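A discrete counterpart of the functional (4) for a closed snake can be evaluated at a set of nodes as follows; this is a sketch with unit node spacing, and the forcing term F is supplied as a callable:

```python
import numpy as np

def snake_energy(x, y, alpha, beta, F):
    """Discrete version of functional (4) for a closed snake.

    x, y: node coordinates; alpha, beta: stiffness/elasticity (scalars or
    per-node arrays); F: callable returning edge strength at (x, y).
    Uses circular finite differences with unit node spacing.
    """
    # elasticity term: squared first differences (contraction tendency)
    dx, dy = np.diff(x, append=x[:1]), np.diff(y, append=y[:1])
    stretch = beta * (dx ** 2 + dy ** 2)
    # stiffness term: squared second differences (resistance to bending)
    d2x = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    d2y = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    bend = alpha * (d2x ** 2 + d2y ** 2)
    # forcing term: minus the edge strength along the curve
    return np.sum(bend + stretch - F(x, y))
```

With alpha set to zero and no image force, the energy is purely elastic, so a smaller closed curve always has lower energy - the contraction tendency described above.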
As it stands, minimising the functional (4) is trivial.
If the snake is not closed then the solution degenerates into a single point (x(s),y(s))=constant, where the point is chosen to maximise the edge strength F(x(s),y(s)). Physically, this is because the snake will tend to pull its two end points together in order to minimise the elastic energy, and thus shrink to a single point. The global minimum is attained at the point in the image where the edge strength is largest. To prevent this from occurring it is necessary to fix the positions of the ends of the snake in some way. That is, 'boundary conditions' are required. It turns out to be necessary to fix more than just the location of the end points, and two further conditions are required for a well posed problem. A convenient condition is to impose zero curvature at each end point.
Similarly, the global minimum for a closed-loop snake occurs when it contracts to a single point. However, in contrast to a fixed-end snake, additional boundary conditions cannot be applied to eliminate the degenerate solution. The degenerate solution in this case is the true global minimum.
Clearly the ideal situation is to seek a local minimum in the locality of the initial position of the snake. In practice the problem that is solved is weaker than this: find a curve û(s)=(x(s),y(s)) ∈ H²[0,1] x H²[0,1] such that

dI(û(s) + εv(s))/dε at ε=0 is 0, for all v(s) ∈ H₀²[0,1] x H₀²[0,1] … (5)

Here H²[0,1] denotes the class of real valued functions defined on [0,1] that have 'finite energy' in the second derivatives (that is, the integral of the square of the second derivatives exists [Keller HB, Numerical Methods for Two-Point Boundary Value Problems, Blaisdell, 1968]), and H₀²[0,1] is the class of functions in H²[0,1] that are zero at s=0 and s=1.
To see how this relates to finding a minimum, consider û(s) to be a local minimum and û(s)+εv(s) to be a perturbation about the minimum that satisfies the same boundary conditions (i.e. v(0)=v(1)=0).
Clearly, considered as a function of ε, I(ε)=I(û(s)+εv(s)) is a minimum at ε=0. Hence the derivative of I(ε) must be zero at ε=0. Equation (5) is therefore a necessary condition for a local minimum. Although solutions to (5) are not guaranteed to be minima for completely general
edge strength functions, it has been found in practice that solutions are indeed minima.
Standard arguments in the calculus of variations show that problem (5) is equivalent to another problem, which is simpler to solve: find a curve (x(s),y(s)) ∈ C⁴[0,1] x C⁴[0,1] that satisfies the pair of fourth order ordinary differential equations

-d²/ds²{α(s) d²x/ds²} + d/ds{β(s) dx/ds} + (1/2) ∂F/∂x = 0 … (6)

-d²/ds²{α(s) d²y/ds²} + d/ds{β(s) dy/ds} + (1/2) ∂F/∂y = 0 … (7)

together with the boundary conditions x(0), y(0), x(1), y(1) given, and

d²x/ds²|₀ = d²x/ds²|₁ = d²y/ds²|₀ = d²y/ds²|₁ = 0 … (8)

The statement of the problem is for the case of a fixed-end snake, but if the snake is to form a closed loop then the boundary conditions above are replaced by periodicity conditions. Both of these problems can easily be solved using finite differences.
The finite difference approach starts by discretising the interval [0,1] into N-1 equispaced subintervals of length h = 1/(N-1) and defines a set of nodes {s_i}, i=1,…,N, where s_i = (i-1)h. The method seeks a set of approximations {(x_i, y_i)}, i=1,…,N, to {(x(s_i), y(s_i))}, i=1,…,N, by replacing the differential equations (6) and (7) in the continuous variables with a set of difference equations in the discrete variables [Keller HB, ibid.]. Replacing the derivatives in (6) by difference approximations at the point s_i gives

(1/h²){ α_{i+1}(x_{i+2} - 2x_{i+1} + x_i)/h² - 2α_i(x_{i+1} - 2x_i + x_{i-1})/h² + α_{i-1}(x_i - 2x_{i-1} + x_{i-2})/h² } - (1/h){ β_i(x_{i+1} - x_i)/h - β_{i-1}(x_i - x_{i-1})/h } = (1/2) ∂F/∂x |_{s_i} ; for i=3,4,…,N-2 … (9)

where α_i = α(s_i) and β_i = β(s_i). Similarly a difference approximation to (7) may be derived. Note that the difference equation only holds at internal nodes in the interval where the indices referenced lie in the range 1 to N. Collecting like terms together, (9) can be written as

a_i x_{i-2} + b_i x_{i-1} + c_i x_i + d_i x_{i+1} + e_i x_{i+2} = f_i

where

a_i = α_{i-1}/h⁴
b_i = -2α_i/h⁴ - 2α_{i-1}/h⁴ - β_{i-1}/h²
c_i = α_{i+1}/h⁴ + 4α_i/h⁴ + α_{i-1}/h⁴ + β_i/h² + β_{i-1}/h²
d_i = -2α_{i+1}/h⁴ - 2α_i/h⁴ - β_i/h²
e_i = α_{i+1}/h⁴
f_i = (1/2) ∂F/∂x |_{s_i}

Discretising both the differential equations (6) and (7) and taking boundary conditions into account, the finite difference approximations x = {x_i} and y = {y_i} to {x(s_i)} and {y(s_i)}, respectively, satisfy the following system of algebraic equations

Kx = f(x, y), Ky = g(x, y) … (10)
-- 1 4 -- =
The structure of the matrices R and the right hand vectors f and g are different depending on whether closed or open snakQ boundary conditions are used. If the snake is closed then fictitious nodes at SO, S_1, SN+1 and SN+2 are 5 introduced and +he aifference equation (9) is applied at nodes 0, 1, N-1 and N.
P~r; o~1~; ty implies that5, xO=xN~ X_l=XN_I, XN+I=XI and x2=xN+2. With these conditions in force the coefficient matrix becomes 'cl dl el ai b, `
b2 c2 d2 e~ a2 a3 b3 c3 d3 e3 a4 b4 c4 d4 e4 K-eN_l aN-I bN-I CN-I dN-I
dN eN aN bN CN
10 ~ ' and the right hand side vQctor is ~f1, f-, ~ fN) ~ ~
For fixed-end snakes, fictitious nodes at s₀ and s_{N+1} are introduced and the difference equation (9) is applied at nodes s₂ and s_{N-1}. Two extra difference equations are introduced to approximate the zero curvature boundary conditions

d²x/ds²|_{s=0} = 0, d²x/ds²|_{s=1} = 0

namely x₀ - 2x₁ + x₂ = 0 and x_{N-1} - 2x_N + x_{N+1} = 0. The coefficient matrix is now

        | (c₂-a₂)  d₂   e₂                         |
        | b₃   c₃   d₃   e₃                        |
        | a₄   b₄   c₄   d₄   e₄                   |
    K = | .    .    .    .    .                    |
        | a_{N-2}  b_{N-2}  c_{N-2}  d_{N-2}       |
        | a_{N-1}  b_{N-1}  (c_{N-1}-e_{N-1})      |

and the right hand side vector is

(f₂ - (2a₂+b₂)x₁, f₃ - a₃x₁, f₄, …, f_{N-3}, f_{N-2} - e_{N-2}x_N, f_{N-1} - (2e_{N-1}+d_{N-1})x_N)

The right hand side vector for the difference equations corresponding to (7) is derived in a similar fashion.
The system (10) represents a set of non-linear equations that has to be solved. The coefficient matrix is symmetric and positive definite, and banded for the fixed-end snake. For a closed-loop snake with periodic boundary conditions it is banded, apart from a few off-diagonal entries. As the system is non-linear it is solved iteratively. The iteration performed is

(1/γ)(x_{n+1} - x_n) + K x_{n+1} = f(x_n, y_n) for n=0,1,2,…
(1/γ)(y_{n+1} - y_n) + K y_{n+1} = g(x_n, y_n) for n=0,1,2,…

where γ > 0 is a stabilisation parameter. This can be rewritten as

(K + (1/γ)I) x_{n+1} = (1/γ) x_n + f(x_n, y_n) for n=0,1,2,…
(K + (1/γ)I) y_{n+1} = (1/γ) y_n + g(x_n, y_n) for n=0,1,2,…

This system has to be solved for each n. For a closed-loop snake the matrix on the left hand side is difficult to invert directly because the terms that are outside the main diagonal band destroy the band structure. In general, the coefficient matrix K can be split into the sum of a banded matrix B plus a non-banded matrix A; K = A + B. For a fixed-end snake the matrix A would be zero. The system of equations is now solved for each n by performing the iteration

(B + (1/γ)I) x^(k+1) = -A x^(k) + (1/γ) x_n + f(x_n, y_n) for k=0,1,2,…
(B + (1/γ)I) y^(k+1) = -A y^(k) + (1/γ) y_n + g(x_n, y_n) for k=0,1,2,…

The matrix (B + (1/γ)I) is a band matrix and can be expressed as a product of Cholesky factors L Lᵀ [Johnson & Riess, ibid.]. The systems are solved at each stage by first solving

L z^(k+1) = -A x^(k) + (1/γ) x_n + f(x_n, y_n)
L w^(k+1) = -A y^(k) + (1/γ) y_n + g(x_n, y_n)

followed by

Lᵀ x^(k+1) = z^(k+1)
Lᵀ y^(k+1) = w^(k+1)

Notice that the Cholesky decomposition only has to be performed once.
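For the closed-loop case with constant α and β (so K is circulant pentadiagonal with, at h=1, a = α, b = d = -4α-β, c = 6α+2β, e = α), one step of the stabilised iteration can be sketched as follows. For brevity a dense solve stands in for the banded Cholesky factorisation described in the text:

```python
import numpy as np

def snake_step(x, y, alpha, beta, gamma, fx, fy):
    """One semi-implicit step (K + I/gamma) x_{n+1} = x_n/gamma + f(x_n, y_n)
    for a closed snake with constant alpha, beta and unit node spacing."""
    n = len(x)
    K = np.zeros((n, n))
    for i in range(n):
        # circulant pentadiagonal rows built from a..e with constant
        # alpha, beta and h = 1 (negative indices wrap automatically)
        K[i, i - 2] += alpha
        K[i, i - 1] += -4 * alpha - beta
        K[i, i] += 6 * alpha + 2 * beta
        K[i, (i + 1) % n] += -4 * alpha - beta
        K[i, (i + 2) % n] += alpha
    A = K + np.eye(n) / gamma
    x_new = np.linalg.solve(A, x / gamma + fx)
    y_new = np.linalg.solve(A, y / gamma + fy)
    return x_new, y_new
```

With zero image forces a closed snake shrinks a little on every step, which is the contraction behaviour the elastic term is meant to provide; in use, fx and fy are the image-derived forcing terms that stop the contraction at object boundaries.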
Model-based coding schemes use 2-D or 3-D models of scene objects in order to reduce the redundancy in the information needed to encode a moving sequence of images. The location and tracking of moving objects is of fundamental importance in this. Videoconference and videophone type scenes may present difficulties for conventional machine vision algorithms as there can often be low contrast and 'fuzzy' moving boundaries between a person's hair and the background. Adaptive contour models or 'snakes' form a class of techniques which are able to locate and track object boundaries; they can fit themselves to low contrast boundaries and can fill in across boundary segments between which there is little or no local evidence for an edge. This section discusses the use of snakes for isolating the head boundary in images as well as a technique which combines block motion estimation and the snake: the 'power-assisted' snake.
The snake is a continuous curve (possibly closed) which attempts to dynamically position itself from a given starting position in such a way that it clings to edges in the image.
Full details of the implementation for both closed and ' fixed-15 end' snakes are given in Waite JB, Welsh WJ, " Head BoundaryI.ocation using Snakes", British Telecom Technology Journal, Vol 8, No 3, ,July 1990, which describes two alternative implementation strategies: finite elements and finite differences. We implemented both closed and fixed-end snakes 20 using finite differences. The snake is initialised around the periphery of a head-and-shoulders image and allowed to contract under its own internal elastic force. It is also acted on by forces derived from the image which are generated by first processing the image using a ~aplacian-type operator 25 with a large space constant the output of which is rectified and modified using a smooth non-linear function. The rectification results in isolating the ' valley~ features which have been shown to correspond to the subj ectively important boundaries in facial images; the non-linear function 30 effectively reduces the weighting of strong edges relative to weaker edges in order to give the weaker boundQries a better chance to influence the snake. After about 200 i~:erations of the snake it reaches the position hugging the boundary of the head. In a second example, ~a fixed-end snake with its end 35 points at the bottom corners of :the image was allowed to contract in from the sides and top of the image. The snake stabilises on the boundar.y between hair znd background although this is a relatively low-contrast boundary in the WO 92/02000 ` = ~ . PCT/GB91~1183 2~8~23 - 18- ~
image. As the snake would face problems trying to contract across a patterned background, it might be better to derive the image forces from a moving edge detector.
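The valley-isolating image forces described above can be sketched as follows. This is an illustrative numpy sketch, not the actual implementation: the function names, the Gaussian radius, and the tanh gain used to compress strong edges are all assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to unit sum."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def valley_force_image(img, sigma=4.0, gain=0.5):
    """Rectified, compressed Laplacian-type response emphasising dark 'valley' lines."""
    k = gaussian_kernel(sigma)
    # separable Gaussian blur: the 'large space constant'
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, sm)
    # 5-point discrete Laplacian; positive where intensity dips (valleys)
    lap = (np.roll(sm, 1, 0) + np.roll(sm, -1, 0) +
           np.roll(sm, 1, 1) + np.roll(sm, -1, 1) - 4 * sm)
    rectified = np.maximum(lap, 0.0)   # rectification: keep valley response only
    return np.tanh(gain * rectified)   # smooth non-linearity squashes strong edges
```

The tanh step plays the role of the smooth non-linear function: it saturates strong responses so that weak boundaries retain influence on the snake.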
In Kass et al, ibid., an example is shown of snakes being used to track the moving lips of a person. First, the snake is stabilised on the lips in the first frame of a moving sequence of images; in the second frame it is initialised in the position corresponding to its stable position in the previous frame and allowed to achieve equilibrium again.
There is a clear problem with the technique in this form: if the motion between frames is too great, the snake may lock on to different features in the next frame and thus lose track. Kass suggests a remedy using the principle of 'scale-space continuation': the snake is allowed to stabilise first on an image which has been smoothed using a Gaussian filter with a large space constant; this has the effect of pulling the snake in from a large distance. After equilibrium has occurred, the snake is presented with a new set of image forces derived by using a Gaussian with a slightly smaller space constant, and the process is continued until equilibrium has occurred in the image at the highest level of resolution possible.
This is clearly a computationally expensive process; a radically simpler technique has been developed and found to work well, and this will now be described. After the snake has reached equilibrium in the first frame of a sequence, block motion estimation is carried out at the positions of the snake nodes (the snake is conventionally implemented by approximating it with a set of discrete nodes - 24 in our implementation). The motion estimation is performed from one frame into the next frame, which is the opposite sense to that conventionally performed during motion compensation for video coding. If the best match positions for the blocks are plotted in the next frame then, due to the 'aperture problem', a good match can often be found at a range of points along a boundary segment which is longer than the side of the block being matched. The effect is to produce a non-uniform spacing of the points. The snake is then initialised in the frame with its nodes at the best match block positions and allowed to run for a few iterations, typically ten. The result is that the nodes are now uniformly distributed along the boundary; the snake has successfully tracked the boundary, the block motion estimation having acted as a sort of 'power-assist' which will ensure tracking is maintained as long as the overall motion is not greater than the maximum displacement of the block search. As the motion estimation is performed at only a small set of points, the computation time is not increased significantly.
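The two stages of the power-assist (block matching at the node positions, then a few snake iterations to even out the spacing) can be sketched roughly as below. This is illustrative code only: the SAD metric, block size of 8, search range of 7 and the simple neighbour-relaxation step are assumptions; the text specifies only 24 nodes and a bounded block search.

```python
import numpy as np

def block_match(prev, curr, node, block=8, search=7):
    """Best match in `curr` of the block centred on `node` in `prev` (SAD metric)."""
    y, x = node
    h = block // 2
    ref = prev[y - h:y + h, x - h:x + h]
    best, best_pos = np.inf, node
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = curr[cy - h:cy + h, cx - h:cx + h]
            if cand.shape != ref.shape:   # candidate block falls off the image
                continue
            sad = float(np.abs(cand - ref).sum())
            if sad < best:
                best, best_pos = sad, (cy, cx)
    return best_pos

def respace_nodes(nodes, iterations=10, k=0.5):
    """A few elastic iterations redistribute matched nodes evenly along a closed contour."""
    u = np.asarray(nodes, dtype=float)
    for _ in range(iterations):
        neighbours = 0.5 * (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0))
        u += k * (neighbours - u)         # each node relaxes toward its neighbours
    return u
```

Since the matching is done only at the node positions rather than over the whole frame, the cost scales with the number of nodes, which is why the overall computation time grows only slightly.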
Both fixed-end and closed snakes have been shown to perform object boundary location even in situations where there may be low contrast between the object and its background.
A composite technique using both block motion estimation and snake fitting has been shown to perform boundary tracking in a sequence of moving images. The technique is simpler to implement than an equivalent coarse-to-fine resolution technique. The methods described in the paper have so far been tested on images where the object boundaries do not have very great or discontinuous curvature at any point; if these conditions are not met, the snake would fail to conform itself correctly to the boundary contour. One solution, currently being pursued, is to effectively split the boundaries into a number of shorter segments and fit these segments with several fixed-end snakes.
According to a second aspect of the present invention a method of verifying the identity of the user of a data carrier comprises: generating a digital facial image of the user; receiving the data carrier and reading therefrom identification data; performing the method of the first aspect of the present invention; comparing each feature vector, or data derived therefrom, with the identification data; and generating a verification signal in dependence upon the comparison.
According to a yet further aspect of the present invention apparatus for verifying the identity of the user of a data carrier comprises: means for generating a digital facial image of the user; means for receiving the data carrier and reading therefrom identification data; means for performing the method of the first aspect of the present invention; and means for comparing each feature vector, or data derived therefrom, with the identification data and generating a verification signal in dependence upon the comparison.
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a flow diagram of the calculation of a feature vector;
Figure 2 illustrates apparatus for credit verification using the method of the present invention; and
Figure 3 illustrates the method of the present invention.
Referring to Figure 1, an overview of an embodiment incorporating both aspects of the invention will be described.
An image of the human face is captured in some manner - for example by using a video camera or by a photograph - and digitised to provide an array of pixel values. A head detection algorithm is employed to locate the position within the array of the face or head. This head location stage may comprise one of several known methods but is preferably a method using the above described "snake" techniques. Pixel data lying outside the boundaries thus determined are ignored.
The second step is carried out on the pixel data lying inside the boundaries to locate the features to be used for recognition - typically the eyes and mouth. Again, several location techniques for finding the position of the eyes and mouth are known from the prior art, but preferably a two stage process of coarse location followed by fine location is employed. The coarse location technique might, for example, be that described in US 4841575.
The fine location technique preferably uses the deformable template technique described by Yuille et al, "Feature Extraction from Faces using Deformable Templates", Harvard Robotics Lab Technical Report Number 88/2, published in Computer Vision and Pattern Recognition, June 1989, IEEE.
In this technique, which has been described above, a line model topologically equivalent to the feature is positioned (by the coarse location technique) near the feature and is iteratively moved and deformed until the best fit is obtained.
The feature is identified as being at this position.
Next, the shape of the feature is changed until it assumes a standard, topologically equivalent, shape. If the fine location technique utilised deformable templates as disclosed above, then the deformation of the feature can be achieved to some extent by reversing the deformation of the template to match the feature to the initial, standard, shape of the template.
Since the exact position of the feature is now known, and its exact shape is specified, this information can be employed as identifiers of the image supplementary to the recognition process using the feature vector of the feature. All image data outside the region identified as being the feature are ignored, and the image data identified as being the feature are resolved into their orthogonal eigen-picture components corresponding to that feature. The component vector is then compared with a component vector corresponding to a given person to be identified and, in the event of substantial similarity, recognition may be indicated.
Referring to Figure 2, an embodiment of the invention suitable for credit card verification will now be described.
A video camera 1 receives an image of a prospective user of a credit card terminal. Upon entry of the card to a card entry device 2, the analogue output of the video camera 1 is digitised by an A/D converter 3, and sequentially clocked into a framestore 4. A video processor 5 (for example, a suitable digital signal processing chip such as the AT&T DSP20) is connected to access the framestore 4 and processes the digital image therein to form an edge-enhanced image. One method of doing this is simply to subtract each sample from its predecessor to form a difference picture, but a better method involves the use of a Laplacian type of operator, the output of which is modified by a sigmoidal function which suppresses small levels of activity due to noise as well as very strong edges, whilst leaving intermediate values barely changed. By this means, a smoother edge image is generated, and weak edge contours such as those around the line of the chin are enhanced. This edge picture is stored in an edge picture framebuffer 6. The processor then executes a closed loop snake method, using finite differences, to derive a boundary which encompasses the head. Once the snake algorithm has converged, the position of the boundaries of the head in the edge image, and hence in the corresponding image in the frame store 4, is known.
The edge image in the framestore 6 is then processed to derive a coarse approximation to the location of the features of interest - typically the eyes and the mouth. The method of Nagao is one suitable technique (Nagao M, "Picture Recognition and Data Structure", Graphic Languages - Ed. Rosenfeld) as described in our earlier application EP0225729. The estimates of position thus derived are used as starting positions for the dynamic template process which establishes the exact feature position.
Accordingly, processor 5 employs the method described in Yuille et al [Yuille A, Cohen D, Hallinan P, (1988), "Facial Feature Extraction by Deformable Templates", Harvard Robotics Lab. Technical Report no. 88-2] to derive position data for each feature, which consists of a size (or resolution) and a series of point coordinates given as fractions of the total size of the template. Certain of these points are designated as key points, which are always internal to the template, the other points always being edge points. These key point position data are stored, and may also be used as recognition indicia. This is indicated in Figure 3.
Next, the geometrical transformation of the feature to the standard shape is performed by the processor 5. This transformation takes the form of a mapping between triangular facets of the regions and the templates. The facets consist of local collections of three points and are defined in the template definition files. The mapping is formed by considering the x, y values of template vertices with each of the x', y' values of the corresponding region vertices - this yields two plane equations from which each x', y' point can be calculated given any x, y within the template facet, and thus the image data can be mapped from the region sub-image.
The entire template sub-image is obtained by rendering (or scan-converting) each constituent facet pel by pel, taking the pel's value from the corresponding mapped location in the equivalent region's sub-image.
The processor 5 is arranged to perform the mapping of the extracted region sub-images to their corresponding generic template size and shape. The keypoints on the regions form a triangular mesh, with a corresponding mesh defined for the generic template shape; mappings are then formed from each triangle in the generic mesh to its equivalent in the region mesh. The distorted sub-images are then created and stored in the template data structures for later display.
The central procedure in this module is the 'template stretching' procedure. This routine creates each distorted template sub-image facet by facet (each facet is defined by three connected template points). A mapping is obtained from each template facet to the corresponding region facet and then the template facet is filled in pel by pel with image data mapped from the region sub-image. After all facets have been processed in this way the distorted template sub-image will have been completely filled in with image data.
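The plane-equation construction and the facet-by-facet fill can be sketched as follows. This is an illustrative sketch only: the function names, the sign-of-area inside test, and the nearest-neighbour sampling are assumptions, not the procedure as actually implemented.

```python
import numpy as np

def affine_from_facets(template_tri, region_tri):
    """Solve the two plane equations x' = a*x + b*y + c and y' = d*x + e*y + f
    from the three corresponding vertex pairs of a facet."""
    A = np.column_stack([np.asarray(template_tri, float), np.ones(3)])
    coef_x = np.linalg.solve(A, [p[0] for p in region_tri])
    coef_y = np.linalg.solve(A, [p[1] for p in region_tri])
    return lambda x, y: (coef_x[0] * x + coef_x[1] * y + coef_x[2],
                         coef_y[0] * x + coef_y[1] * y + coef_y[2])

def facet_mask(shape, tri):
    """Boolean mask of pels inside the triangle (consistent sign-of-area test)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    (x0, y0), (x1, y1), (x2, y2) = tri
    def side(xa, ya, xb, yb):
        return (xx - xa) * (yb - ya) - (yy - ya) * (xb - xa)
    s0, s1, s2 = side(x0, y0, x1, y1), side(x1, y1, x2, y2), side(x2, y2, x0, y0)
    return ((s0 >= 0) & (s1 >= 0) & (s2 >= 0)) | ((s0 <= 0) & (s1 <= 0) & (s2 <= 0))

def render_facet(template_img, region_img, template_tri, region_tri):
    """Fill the template facet pel by pel with data mapped from the region sub-image."""
    mapping = affine_from_facets(template_tri, region_tri)
    ys, xs = np.nonzero(facet_mask(template_img.shape, template_tri))
    for y, x in zip(ys, xs):
        xp, yp = mapping(x, y)
        template_img[y, x] = region_img[int(round(yp)), int(round(xp))]
```

Solving the 3x3 system once per facet yields the two plane equations; every pel inside the facet is then mapped through them, which is exactly the scan-conversion order described above.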
The standardised feature image thus produced is then stored in a feature image buffer 7. An eigen-picture buffer 8 contains a plurality (for example 50) of eigen-pictures of increasing sequency which have previously been derived in known manner from a representative population (preferably using an equivalent geometric normalisation technique to that disclosed above). A transform processor 9 (which may in practice be realised as processor 5 acting under suitable stored instructions) derives the co-ordinates or components of the feature image with regard to each eigen-picture, to give a vector of 50 numbers, using the method described above. The card entry device 2 reads from the inserted credit card the 50 components which characterise the
correct user of that card, which are input to a comparator 10 (which may again in practice be realised as part of a single processing device) which measures the distance in pattern space between the two vectors. The preferred metric is the Euclidean distance, although other distance metrics (e.g. a "city block" metric) could equally be used. If this distance is less than a predetermined threshold, correct recognition is indicated at an output 11; otherwise, recognition failure is signalled.
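The comparator stage reduces to a distance test in pattern space; a minimal sketch (the function name, argument order and default metric selection are illustrative assumptions):

```python
import numpy as np

def verify(card_vector, live_vector, threshold, metric="euclidean"):
    """Accept if the stored card components and the live feature vector are close."""
    d = np.asarray(live_vector, float) - np.asarray(card_vector, float)
    # Euclidean distance preferred; a "city block" (L1) metric is an alternative
    dist = float(np.abs(d).sum()) if metric == "cityblock" else float(np.linalg.norm(d))
    return dist < threshold
```

The threshold trades false acceptances against false rejections and would in practice be tuned on a representative population.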
Other data may also be incorporated into the recognition process; for example, data derived during template deformation, or head measurements (e.g. the ratio of head height to head width derived during the head location stage), or the feature position data as mentioned above. Recognition results may be combined in the manner indicated in our earlier application GB9005190.5.
Generally, some preprocessing of the image is provided (indicated schematically as 12 in Figure 2); for example, noise filtering (spatial or temporal) and brightness or contrast normalisation.
Variations in lighting can produce a spatially variant effect on the image brightness due to shadowing by the brows etc. It may be desirable to further pre-process the images to remove most of this variation by using a second derivative operator or morphological filter in place of the raw image data currently used. A blurring filter would probably also be required.
It might also be desirable to reduce the effects of variations in geometric normalisation on the representation vectors. This could be accomplished by using low-pass filtered images throughout, which should give more stable representations for recognition purposes.
This invention relates to a method of processing an image and particularly, but not exclusively, to the use of such a method in the recognition and encoding of images of objects such as faces.
One field of object recognition which is potentially useful is in automatic identity verification techniques for restricted access buildings or fund transfer security, for example in the manner discussed in GB Pat. No. 2,229,305 published Sept. 19, 1990. In many such fund transfer transactions a user carries a card which includes machine-readable data stored magnetically, electrically or optically. One particular application of face recognition is to prevent the use of such cards by unauthorised personnel by storing face identifying data of the correct user on the card, reading the data out, obtaining a facial image of the person seeking to use the card by means of a camera, analyzing the image, and comparing the results of the analysis with the data stored on the card for the correct user.
The storage capacity of such cards is typically only a few hundred bytes, which is very much smaller than the memory space needed to store a recognisable image as a frame of pixels. It is therefore necessary to use an image processing technique which allows the image to be characterised using the smaller number of memory bytes.
Another application of image processing which reduces the number of bytes needed to characterise an image is in hybrid video coding techniques for video telephones, as disclosed in our earlier filed application published as US patent 4841575. In this and similar applications the perceptually important parts of the image are located and the available coding data is preferentially allocated to those parts.
A known method of such processing of an image comprises the steps of: locating within the image the position of at least one predetermined feature; extracting image data from
said image representing each said feature; and calculating for each feature a feature vector representing the position of the image data of the feature in an N-dimensional space, said space being defined by a plurality of reference vectors each of which is an eigenvector of a training set of images of like features.
The Karhunen-Loeve transform (KLT) is well known in the signal processing art for various applications. It has been proposed to apply this transform to identification of human faces (Sirovitch and Kirby, J. Opt. Soc. Am. A, Vol 4, No 3, pp 519-524, "Low Dimensional Procedure for the Characterisation of Human Faces", and IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol 12, No 1, pp 103-108, "Application of the Karhunen-Loeve Procedure for the Characterisation of Human Faces"). In these techniques, images of substantially the whole face of members of a reference population were processed to derive a set of N eigenvectors each having a picture-like appearance (eigen-pictures, or caricatures). These were stored. In a subsequent recognition phase, a given image of a test face (which need not belong to the reference population) was characterised by its image vector in the N-dimensional space defined by the eigenvectors. By comparing the image vector, or data derived therefrom, with the identification data one can generate a verification signal in dependence upon the result of the comparison.
However, despite the ease with which humans "never forget a face", the task for a machine is a formidable one because a person's facial appearance can vary widely in many respects over time; the eyes and mouth, in the case of facial image processing, for example, are mobile.
According to a first aspect of the present invention a method of processing an image according to the preamble of claim 1 is characterised in that the method further comprises the step of modifying the image data of each feature to normalise the shape of each feature, thereby to reduce its deviation from a predetermined standard shape of said feature, which step is carried out before calculating the corresponding feature vector.
We have found that recognition accuracy of images of faces, for example, can be improved greatly by such a modifying step, which reduces the effects of a person's changing facial expression.
In the case of an image of a face, for example, the predetermined feature could be the entire face or a part of it such as the nose or mouth. Several predetermined features may be located and characterised as vectors in the corresponding space of eigenvectors if desired.
It will be clear to those skilled in this field that the present invention is applicable to processing images of objects other than the faces of humans, notwithstanding that the primary application envisaged by the applicant is in the field of human face images and that the discussion and specific examples of embodiments of the invention are directed to such images.
The invention also enables the use of fewer eigen-pictures, and hence results in a saving of storage or of transmission capacity.
Further, by modifying the shape of the feature towards a standard (topologically equivalent) feature shape, the accuracy with which the feature can be located is improved.
Preferably the training set of images of like features is modified to normalise the shape of each of the training set of images, thereby to reduce their deviation from a predetermined standard shape of said feature, which step is carried out before calculating the eigenvectors of the training set of images.
The method is useful not only for object recognition, but also as a hybrid coding technique in which feature position data and feature representative data (the N-dimensional vector) are transmitted to a receiver where an image is assembled by combining the eigen-pictures corresponding to the image vector.
Eigen-pictures provide a means by which the variation in a set of related images can be extracted and used to represent those images and others like them. For instance, an
eye image could be economically represented in terms of a 'best' coordinate system of 'eigen-eyes'. The eigen-pictures themselves are determined from a training set of representative images, and are formed such that the first eigen-picture embodies the maximum variance between the images, and successive eigen-pictures have monotonically decreasing variance. An image in the set can then be expressed as a series, with the eigen-pictures effectively forming basis functions:
I = M + w1P1 + w2P2 + ... + wmPm

where
M = mean over the entire training set of images
wi = component of the i'th eigen-picture
Pi = i'th eigen-picture, i = 1, ..., m
I = original image

If we truncate the above series we still have the best representation we could for the given number of eigen-pictures, in a mean-square-error sense.
The basis of eigen-pictures is chosen such that they point in the directions of maximum variance, subject to being orthogonal. In other words, each training image is considered as a point in n-dimensional space, where 'n' is the size of the training images in pels; eigen-picture vectors are then chosen to lie on lines of maximum variance through the cluster(s) produced.
Given training images I1, ..., Im, we first form the mean image M, and then the difference images (a.k.a. 'caricatures') Di = Ii - M.
The first paragraph (above) is equivalent to choosing our eigen-picture vectors Pk such that

λk = (1/m) Σi (Pk . Di)²

is maximised, with Pj . Pk = 0 for j ≠ k.

The eigen-pictures Pk above are in fact the eigenvectors of a very large covariance matrix, the solution of which would be intractable. However, the problem can be reduced to more manageable proportions by forming the matrix L, where Lij = Di^T Dj, and solving for the eigenvectors vk of L.
The eigen-pictures can then be found by

Pk = Σj (vk)j Dj

The term 'representation vector' has been used to refer to the vector whose components (wi) are the factors applied to each eigen-picture (Pi) in the series. That is

Ω = (w1, w2, ..., wm)

The representation vector equivalent to an image I is formed by taking the inner product of I's caricature with each eigen-picture.
wi = (I - M) . Pi, for 1 ≤ i ≤ m.
Note that a certain assumption is made when it comes to representing an image taken from outside the training set used to create the eigen-pictures; the image is assumed to be sufficiently 'similar' to those in the training set to enable it to be well represented by the same eigen-pictures.
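The small-matrix construction above (form the caricatures, build L from their inner products, take its eigenvectors, then recombine them into eigen-pictures and project new images onto them) can be sketched in a few lines of numpy. The function names and the explicit normalisation step are illustrative assumptions.

```python
import numpy as np

def train_eigenpictures(images, m_keep):
    """Small-matrix route: eigenvectors of L (m x m) give the eigen-pictures."""
    X = np.asarray([im.ravel() for im in images], dtype=float)
    M = X.mean(axis=0)
    D = X - M                                  # caricatures Di = Ii - M, one per row
    L = D @ D.T                                # Lij = Di . Dj : m x m instead of n x n
    vals, vecs = np.linalg.eigh(L)
    order = np.argsort(vals)[::-1][:m_keep]    # keep directions of decreasing variance
    P = vecs[:, order].T @ D                   # Pk = sum_j (vk)_j Dj, rows of P
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    return M, P

def represent(image, M, P):
    """Representation vector: wi = (I - M) . Pi for each eigen-picture."""
    return P @ (image.ravel() - M)
```

With all non-degenerate components kept, a training image is reconstructed exactly as M + Ω·P; truncating the series drops variance in mean-square-error-optimal order.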
The representation of two images can be compared by calculating the Euclidean distance between them:

dij = |Ωi - Ωj|

Thus, recognition can be achieved via a simple threshold, dij < T meaning recognised, or dij can be used as a sliding confidence scale.
Deformable templates consist of parametrically defined geometric templates which interact with images to provide a best fit of the template to a corresponding image feature.
For example, a template for an eye might consist of a circle for the iris and two parabolas for the eye/eyelid boundaries, where size, shape and location parameters are variable.
An energy function is formed by integrating certain image attributes over template boundaries, and parameters are iteratively updated in an attempt to minimise this function.
This has the effect of moving the template towards the best available fit in the given image.
The location within the image of the position of at least one predetermined feature may employ a first technique to provide a coarse estimation of position and a second, different, technique to improve upon the coarse estimation.
The second technique preferably involves the use of such a deformable template technique.
The deformable template technique requires certain filtered images in addition to the raw image itself, notably peak, valley and edge images. Suitable processed images can be obtained using morphological filters, and it is this stage which is detailed below.
Morphological filters are able to provide a wide range of filtering functions including nonlinear image filtering, noise suppression, edge detection, skeletonization, shape recognition etc. All of these functions are provided via simple combinations of two basic operations termed erosion and dilation. In our case we are only interested in valley, peak and edge detection.
Erosion of greyscale images effectively causes bright areas to shrink in upon themselves, whereas dilation causes bright areas to expand. An erosion followed by a dilation causes bright peaks to be lost (an operator called 'open').
Conversely, a dilation followed by an erosion causes dark valleys to be filled (an operator called 'close'). For specific details see Maragos P, (1987), "Tutorial on Advances in Morphological Image Processing and Analysis", Optical Engineering, Vol 26, No 7.
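Peak and valley images follow directly from the open and close operations just described: the peaks are what opening removes, and the valleys are what closing fills in. A rough numpy sketch with a flat 3x3 structuring element (the element size and the two-iteration depth are assumptions):

```python
import numpy as np

_OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def dilate(img):
    """Greyscale dilation: max over the 3x3 neighbourhood (bright areas expand)."""
    return np.max([np.roll(np.roll(img, dy, 0), dx, 1) for dy, dx in _OFFSETS], axis=0)

def erode(img):
    """Greyscale erosion: min over the 3x3 neighbourhood (bright areas shrink)."""
    return np.min([np.roll(np.roll(img, dy, 0), dx, 1) for dy, dx in _OFFSETS], axis=0)

def peak_image(img, n=2):
    """Peaks = image minus its opening (bright detail lost by erode-then-dilate)."""
    opened = img
    for _ in range(n):
        opened = erode(opened)
    for _ in range(n):
        opened = dilate(opened)
    return img - opened

def valley_image(img, n=2):
    """Valleys = closing minus image (dark detail filled by dilate-then-erode)."""
    closed = img
    for _ in range(n):
        closed = dilate(closed)
    for _ in range(n):
        closed = erode(closed)
    return closed - img
```

Both residues are non-negative and are large only where a bright peak or dark valley is narrower than the effective structuring element, which is the property the deformable templates exploit.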
In image processing systems of the kind to which the present invention relates, it is often necessary to locate the object, e.g. head or face, within the image prior to processing.
Usually this is achieved by edge detection, but traditional edge detection techniques are purely local - an edge is indicated whenever a gradient of image intensity occurs - and hence will not in general form an edge that is completely closed (i.e. forms a loop around the head) but will instead create a number of edge segments which together
outline or partly outline the head. Post-processing of some kind is thus usually necessary.
We have found that the adaptive contour model, or "snake", technique is particularly effective for this purpose.
Preferably, the predetermined feature of the image is located by determining parameters of a closed curve arranged to lie adjacent a plurality of edge features of the image, said curve being constrained to exceed a minimum curvature and to have a minimum length compatible therewith. The boundary of the curve may be initially calculated proximate the edges of the image, and subsequently iteratively reduced.
Prior to a detailed description of the physical embodiment of the invention, the 'snake' signal processing techniques mentioned above will now be described in greater detail.
Introduced by Kass et al [Kass M, Witkin A, Terzopoulos D, "Snakes: Active Contour Models", International Journal of Computer Vision, 321-331, 1988], snakes are a method of attempting to provide some of the post-processing that our own visual system performs. A snake has built into it various properties that are associated with both edges and the human visual system (e.g. continuity, smoothness and to some extent the capability to fill in sections of an edge that have been occluded).
A snake is a continuous curve (possibly closed) that attempts to dynamically position itself from a given starting position in such a way that it 'clings' to edges in the image.
The form of snake that will be considered here consists of curves that are piecewise polynomial. That is, the curve is in general constructed from N segments

{xi(s), yi(s)}, i = 1, ..., N
where each of the xi(s) and yi(s) are polynomials in the parameter s. As the parameter s is varied a curve is traced out.
From now on snakes will be referred to as the parametric curve u(s) = (x(s), y(s)), where s is assumed to vary between 0 and 1. What properties should an 'edge hugging' snake have?
The snake must be 'driven' by the image. That is, it must be able to detect an edge in the image and align itself with the edge. One way of achieving this is to try to position the snake such that the average 'edge strength' (however that may be measured) along the length of the snake is maximised. If the measure of edge strength is F(x, y) > 0 at the image point (x, y) then this amounts to saying that the snake u(s) is to be chosen in such a way that the functional

∫₀¹ F(x(s), y(s)) ds ... (1)

is maximised. This will ensure that the snake will tend to mould itself to edges in the image if it finds them, but does not guarantee that it will find them in the first place.
Given an image, the functional may have many local minima; finding them is not a static problem, and this is where the 'dynamics' arise.
An edge detector applied to an image will tend to produce an edge map consisting of mainly thin edges. This means that the edge strength function tends to be zero at most places in the image, apart from on a few lines. As a consequence a snake placed some distance from an edge may not be attracted towards the edge because the edge strength is effectively zero at the snake's initial position. To help the snake come under the influence of an edge, the edge image is blurred to broaden the width of the edges.
If an elastic band were held around a convex object and then let go, the band would contract until the object prevented it from doing so further. At this point the band would be moulded to the object, thus describing the boundary.
Two forces are at work here. Firstly that providing the natural tendency of the band to contract; secondly the opposing force provided by the object. The band contracts because it tries to minimise its elastic energy due to stretching. If the band were described by the parametric curve u(s) = (x(s), y(s)) then the elastic energy at any point s is proportional to
is proportional to WO 92/02000 2 ~ 8 7 5 2 3 PCr/GB91/01~83 _ g (~ ~ (~IJ (dsl~) That is, the energy is proportional to the square of how much the curve is being stretched at that point. The el as t i c e ne rgy a l ong i t s e nti re l e ngth, gi ve n t he c ons t rai nt 5 of the object, is minimised. Hence the elastic band assumes the shape of the curve u(s)=(x(s),y~s)) where the u(s) minimises the functional ~((dX)2 (dy)2} ... (2) subj ect to the constraints of the obj ect. We would like 10 closed snakes to have analogous behaviour. That is, to have a tendency to contract, but to be preYented from doing so by the obj ects in an image. To model this behaviour the parametric curve for the snake is chosen so that the functional (2) tends to be minimised. If in addition the forcing term (1) were included then the snake would be prevented from c~ontracting ' through objects~ as it would be attracted toward their edges. The attractive force would also tend to pull the snake into the hollows of a concave boundary, provided that the restoring ' elastic force' was not too great.
One of the properties of edges that is difficult to model is their behaviour when they can no longer be seen. If one were looking at a car and a person stood in front of it, few would have any difficulty imagining the contours of the edge of the car that were occluded. They would be 'smooth' extensions of the contours either side of the person. If the above elastic band approach were adopted it would be found that the band formed a straight line where the car was occluded (because it tries to minimise energy, and thus length, in this situation). If however the band had some stiffness (that is, a resistance to bending, as for example displayed by a flexible bar) then it would tend to form a smooth curve in the occluded region of the image and be tangential to the boundaries on either side.
Again, a flexible bar tends to form a shape so that its elastic energy is minimised. The elastic energy in bending is dependent on the curvature of the bar, that is, the second derivatives. To help force the snake to emulate this type of behaviour the parametric curve u(s) = (x(s), y(s)) is chosen so it tends to minimise the functional

    ∫ {(d²x/ds²)² + (d²y/ds²)²} ds    ... (3)

which represents a pseudo-bending energy term. Of course, if a snake were made too stiff it would be difficult to force it to conform to highly curved boundaries under the action of the forcing term (1).
Three desirable properties of snakes have now been identified. To incorporate all three into the snake at once, the parametric curve u(s) = (x(s), y(s)) representing the snake is chosen so that it minimises the functional

    I(x(s), y(s)) = ∫₀¹ [ α(s){(d²x/ds²)² + (d²y/ds²)²} + β(s){(dx/ds)² + (dy/ds)²} − F(x(s), y(s)) ] ds    ... (4)

Here the terms α(s) > 0 and β(s) > 0 represent respectively the amount of stiffness and elasticity that the snake is to have. It is clear that if the snake approach is to be successful then the correct balance of these parameters is crucial. Too much stiffness and the snake will not correctly hug the boundaries; too much elasticity and closed snakes will be pulled across boundaries and contract to a point, or may even break away from boundaries at concave regions. The negative sign in front of the forcing term is because minimising −∫ F(x, y) ds is equivalent to maximising ∫ F(x, y) ds.
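The balance described above can be checked numerically. The sketch below is our own illustration rather than part of the specification; the function name, the use of constant α and β, and the trapezoidal quadrature are all assumptions. It evaluates a discrete approximation to the functional (4) for an open snake sampled at N nodes:

```python
import numpy as np

def snake_energy(x, y, F, alpha, beta, h):
    """Discrete approximation to the snake functional (4) for an open snake.

    x, y  : node coordinates sampled along the curve, shape (N,)
    F     : edge strength sampled at the nodes, shape (N,)
    alpha : stiffness weight (constant here for simplicity)
    beta  : elasticity weight (constant here for simplicity)
    h     : spacing of the nodes in the parameter s
    """
    dx, dy = np.gradient(x, h), np.gradient(y, h)      # first derivatives
    d2x, d2y = np.gradient(dx, h), np.gradient(dy, h)  # second derivatives
    # Stiffness term + elasticity term - forcing term, as in (4).
    integrand = alpha * (d2x**2 + d2y**2) + beta * (dx**2 + dy**2) - F
    # Composite trapezoidal rule over [0, 1].
    return h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
```

With α = 0 and no image force, a straight unit-length snake has energy β∫(dx/ds)² ds = β, so shortening the curve always lowers this term - which is exactly the contraction tendency described above.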
As it stands, minimising the functional (4) is trivial.
If the snake is not closed then the solution degenerates into a single point (x(s), y(s)) = constant, where the point is chosen to maximise the edge strength F(x(s), y(s)). Physically, this is because the snake will tend to pull its two end points together in order to minimise the elastic energy, and thus shrink to a single point. The global minimum is attained at the point in the image where the edge strength is largest. To prevent this from occurring it is necessary to fix the positions of the ends of the snake in some way. That is, 'boundary conditions' are required. It turns out to be necessary to fix more than just the location of the end points, and two further conditions are required for a well posed problem. A convenient condition is to impose zero curvature at each end point.
Similarly, the global minimum for a closed-loop snake occurs when it contracts to a single point. However, in contrast to a fixed-end snake, additional boundary conditions cannot be applied to eliminate the degenerate solution. The degenerate solution in this case is the true global minimum.
Clearly the ideal situation is to seek a local minimum in the locality of the initial position of the snake. In practice the problem that is solved is weaker than this: find a curve û(s) = (x(s), y(s)) ∈ H²[0,1] × H²[0,1] such that

    (d/dε) I(û(s) + εv(s)) |ε=0 = 0   for all v(s) ∈ H₀²[0,1] × H₀²[0,1]    ... (5)

Here H²[0,1] denotes the class of real valued functions defined on [0,1] that have 'finite energy' in the second derivatives (that is, the integral of the square of the second derivatives exists [Keller HB, Numerical Methods for Two-Point Boundary Value Problems, Blaisdell, 1968]) and H₀²[0,1] is the class of functions in H²[0,1] that are zero at s=0 and s=1.
To see how this relates to finding a minimum, consider û(s) to be a local minimum and û(s) + εv(s) to be a perturbation about the minimum that satisfies the same boundary conditions (i.e. v(0) = v(1) = 0). Clearly, considered as a function of ε, I(ε) = I(û(s) + εv(s)) is a minimum at ε = 0. Hence the derivative of I(ε) must be zero at ε = 0. Equation (5) is therefore a necessary condition for a local minimum. Although solutions to (5) are not guaranteed to be minima for completely general edge strength functions, it has been found in practice that solutions are indeed minima.
Standard arguments in the calculus of variations show that problem (5) is equivalent to another problem which is simpler to solve: find a curve (x(s), y(s)) ∈ C⁴[0,1] × C⁴[0,1] that satisfies the pair of fourth order ordinary differential equations

    −(d²/ds²){α(s) d²x/ds²} + (d/ds){β(s) dx/ds} + (1/2) ∂F/∂x = 0    ... (6)

    −(d²/ds²){α(s) d²y/ds²} + (d/ds){β(s) dy/ds} + (1/2) ∂F/∂y = 0    ... (7)

together with the boundary conditions x(0), y(0), x(1), y(1) given, and

    d²x/ds²|s=0 = d²x/ds²|s=1 = d²y/ds²|s=0 = d²y/ds²|s=1 = 0    ... (8)

The statement of the problem is for the case of a fixed-end snake, but if the snake is to form a closed loop then the boundary conditions above are replaced by periodicity conditions. Both of these problems can easily be solved using finite differences.
The finite difference approach starts by discretising the interval [0,1] into N−1 equispaced subintervals of length h = 1/(N−1), and defines a set of nodes {s_i}, i = 1, ..., N, where s_i = (i−1)h. The method seeks a set of approximations {(x_i, y_i)} to {(x(s_i), y(s_i))} by replacing the differential equations (6) and (7) in the continuous variables with a set of difference equations in the discrete variables [Keller HB, ibid.]. Replacing the derivatives in (6) by difference approximations at the point s_i gives

    −(1/h⁴){α_{i+1}(x_{i+2} − 2x_{i+1} + x_i) − 2α_i(x_{i+1} − 2x_i + x_{i−1}) + α_{i−1}(x_i − 2x_{i−1} + x_{i−2})}
        + (1/h²){β_{i+1}(x_{i+1} − x_i) − β_i(x_i − x_{i−1})} + (1/2) ∂F/∂x |_{s_i} = 0,   for i = 3, 4, ..., N−2    ... (9)

where α_i = α(s_i) and β_i = β(s_i). Similarly a difference approximation to (7) may be derived. Note that the difference equation only holds at internal nodes in the interval, where the indices referenced lie in the range 1 to N. Collecting like terms together, (9) can be written as

    a_i x_{i−2} + b_i x_{i−1} + c_i x_i + d_i x_{i+1} + e_i x_{i+2} = f_i

where

    a_i = −α_{i−1}/h⁴
    b_i = (2α_i + 2α_{i−1})/h⁴ + β_i/h²
    c_i = −(α_{i+1} + 4α_i + α_{i−1})/h⁴ − (β_{i+1} + β_i)/h²
    d_i = (2α_{i+1} + 2α_i)/h⁴ + β_{i+1}/h²
    e_i = −α_{i+1}/h⁴
    f_i = −(1/2) ∂F/∂x |_{s_i}

Discretising both the differential equations (6) and (7) and taking boundary conditions into account, the finite difference approximations x = {x_i} and y = {y_i} to {x(s_i)} and {y(s_i)}, respectively, satisfy the following system of algebraic equations:

    K x = f(x, y),   K y = g(x, y)    ... (10)
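As a concreteness check on the collected coefficients, they can be assembled directly from per-node samples of α and β. This sketch is our own (the function name and the periodic wrap-around convention for the i±1 samples are assumptions); a useful sanity property is that for any coefficients each row of the operator sums to zero, since the differential operator annihilates constant curves:

```python
import numpy as np

def snake_coefficients(alpha, beta, h):
    """Pentadiagonal coefficients a..e of the collected difference equation.

    alpha, beta : stiffness/elasticity samples at the nodes, shape (N,)
    h           : node spacing
    Neighbouring samples are taken cyclically (closed-snake convention).
    """
    h2, h4 = h * h, h**4
    am, ap = np.roll(alpha, 1), np.roll(alpha, -1)  # alpha_{i-1}, alpha_{i+1}
    bp = np.roll(beta, -1)                          # beta_{i+1}
    a = -am / h4
    b = (2 * alpha + 2 * am) / h4 + beta / h2
    c = -(ap + 4 * alpha + am) / h4 - (bp + beta) / h2
    d = (2 * ap + 2 * alpha) / h4 + bp / h2
    e = -ap / h4
    return a, b, c, d, e
```

The zero row-sum reflects the fact that a constant snake feels no internal force; only the image term f_i can move it.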
The structure of the matrix K and the right hand vectors f and g are different depending on whether closed or open snake boundary conditions are used. If the snake is closed then fictitious nodes at s₀, s₋₁, s_{N+1} and s_{N+2} are introduced and the difference equation (9) is applied at nodes 1, 2, N−1 and N. Periodicity implies that x₀ = x_N, x₋₁ = x_{N−1}, x_{N+1} = x₁ and x_{N+2} = x₂. With these conditions in force the coefficient matrix becomes

    K = | c_1     d_1     e_1     .       .       a_1     b_1     |
        | b_2     c_2     d_2     e_2     .       .       a_2     |
        | a_3     b_3     c_3     d_3     e_3     .       .       |
        | .       a_4     b_4     c_4     d_4     e_4     .       |
        |                 .   .   .                               |
        | e_{N-1} .       .       a_{N-1} b_{N-1} c_{N-1} d_{N-1} |
        | d_N     e_N     .       .       a_N     b_N     c_N     |

and the right hand side vector is (f_1, f_2, ..., f_N).
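The cyclic structure above can be assembled mechanically from the coefficient arrays. This sketch is our own (hypothetical naming); it places a_i..e_i on cyclic column offsets −2..+2 of row i, which reproduces the wrap-around corner entries shown above:

```python
import numpy as np

def closed_snake_matrix(a, b, c, d, e):
    """Assemble the periodic coefficient matrix K for a closed snake.

    Row i holds a[i], b[i], c[i], d[i], e[i] at cyclic column offsets
    -2, -1, 0, +1, +2; the wrapped entries are the off-band terms in
    the corners of the matrix.
    """
    N = len(c)
    K = np.zeros((N, N))
    for i in range(N):
        for off, coef in ((-2, a), (-1, b), (0, c), (1, d), (2, e)):
            K[i, (i + off) % N] += coef[i]
    return K
```

Apart from those corner entries the matrix is banded, which is what motivates the splitting K = A + B used later for the iterative solution.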
For fixed-end snakes fictitious nodes at s₀ and s_{N+1} are introduced and the difference equation (9) is applied at nodes s₂, ..., s_{N−1}. Two extra difference equations are introduced to approximate the zero curvature boundary conditions

    d²x/ds²|s=0 = d²x/ds²|s=1 = 0

namely x₀ − 2x₁ + x₂ = 0 and x_{N−1} − 2x_N + x_{N+1} = 0. The coefficient matrix is now

    K = | (c_2−a_2) d_2     e_2                                     |
        | b_3       c_3     d_3     e_3                             |
        | a_4       b_4     c_4     d_4     e_4                     |
        |           a_5     b_5     c_5     d_5     e_5             |
        |                   .   .   .                               |
        |           a_{N-2} b_{N-2} c_{N-2} d_{N-2}                 |
        |                   a_{N-1} b_{N-1} (c_{N-1}−e_{N-1})       |

and the right hand side vector is

    (f_2 − (2a_2 + b_2)x_1,  f_3 − a_3 x_1,  f_4, ..., f_{N-3},  f_{N-2} − e_{N-2} x_N,  f_{N-1} − (2e_{N-1} + d_{N-1})x_N)

The right hand side vector for the difference equations corresponding to (7) is derived in a similar fashion.
The system (10) represents a set of non-linear equations that has to be solved. The coefficient matrix is symmetric and positive definite, and banded for the fixed-end snake. For a closed-loop snake with periodic boundary conditions it is banded, apart from a few off-diagonal entries. As the system is non-linear it is solved iteratively. The iteration performed is

    (1/γ)(x_{n+1} − x_n) + K x_{n+1} = f(x_n, y_n)   for n = 0, 1, 2, ...
    (1/γ)(y_{n+1} − y_n) + K y_{n+1} = g(x_n, y_n)   for n = 0, 1, 2, ...

where γ > 0 is a stabilisation parameter. This can be rewritten as

    (K + (1/γ)I) x_{n+1} = (1/γ)x_n + f(x_n, y_n)   for n = 0, 1, 2, ...
    (K + (1/γ)I) y_{n+1} = (1/γ)y_n + g(x_n, y_n)   for n = 0, 1, 2, ...

This system has to be solved for each n. For a closed-loop snake the matrix on the left hand side is difficult to invert directly because the terms that lie outside the main diagonal band destroy the band structure. In general, the coefficient matrix K can be split into the sum of a banded matrix B plus a non-banded matrix A; K = A + B. For a fixed-end snake the matrix A would be zero. The system of equations is now solved for each n by performing the iteration

    (B + (1/γ)I) x^(k+1) = −A x^(k) + (1/γ)x_n + f(x_n, y_n)   for k = 0, 1, 2, ...
    (B + (1/γ)I) y^(k+1) = −A y^(k) + (1/γ)y_n + g(x_n, y_n)   for k = 0, 1, 2, ...

The matrix (B + (1/γ)I) is a band matrix and can be expressed as a product of Cholesky factors LLᵀ [Johnson & Riess, ibid.]. The systems are solved at each stage by first solving

    L x̃ = −A x^(k) + (1/γ)x_n + f(x_n, y_n)
    L ỹ = −A y^(k) + (1/γ)y_n + g(x_n, y_n)

followed by

    Lᵀ x^(k+1) = x̃
    Lᵀ y^(k+1) = ỹ

Notice that the Cholesky decomposition only has to be performed once.
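The stabilised outer iteration can be sketched in a few lines. This toy version is our own: it uses a dense solve in place of the banded Cholesky factorisation, and holds the force term fixed so the behaviour is easy to verify. With a symmetric positive definite K and fixed right hand side it converges to the solution of K x = f:

```python
import numpy as np

def relax(K, x0, f, gamma=0.5, n_iter=200):
    """Stabilised iteration (K + I/gamma) x_{n+1} = x_n/gamma + f.

    With a fixed right hand side f the fixed point satisfies K x = f;
    in the snake proper, f is recomputed from (x_n, y_n) at each step
    and (B + I/gamma) is Cholesky-factorised once and reused.
    """
    M = K + np.eye(len(x0)) / gamma
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        # One stabilised step; the iteration matrix is (gamma*K + I)^{-1},
        # whose eigenvalues lie in (0, 1) when K is positive definite.
        x = np.linalg.solve(M, x / gamma + f)
    return x
```

Smaller γ makes each step more cautious (heavier weighting of the previous iterate) but slows convergence, which is the usual stabilisation trade-off.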
Model-based coding schemes use 2-D or 3-D models of scene objects in order to reduce the redundancy in the information needed to encode a moving sequence of images. The location and tracking of moving objects is of fundamental importance in this. Videoconference and videophone type scenes may present difficulties for conventional machine vision algorithms as there can often be low contrast and 'fuzzy' moving boundaries between a person's hair and the background. Adaptive contour models or 'snakes' form a class of techniques which are able to locate and track object boundaries; they can fit themselves to low contrast boundaries and can fill in across boundary segments between which there is little or no local evidence for an edge. This paper discusses the use of snakes for isolating the head boundary in images, as well as a technique which combines block motion estimation and the snake: the 'power-assisted' snake.
The snake is a continuous curve (possibly closed) which attempts to dynamically position itself from a given starting position in such a way that it clings to edges in the image.
Full details of the implementation for both closed and 'fixed-end' snakes are given in Waite JB, Welsh WJ, "Head Boundary Location using Snakes", British Telecom Technology Journal, Vol 8, No 3, July 1990, which describes two alternative implementation strategies: finite elements and finite differences. We implemented both closed and fixed-end snakes using finite differences. The snake is initialised around the periphery of a head-and-shoulders image and allowed to contract under its own internal elastic force. It is also acted on by forces derived from the image which are generated by first processing the image using a Laplacian-type operator with a large space constant, the output of which is rectified and modified using a smooth non-linear function. The rectification results in isolating the 'valley' features which have been shown to correspond to the subjectively important boundaries in facial images; the non-linear function effectively reduces the weighting of strong edges relative to weaker edges in order to give the weaker boundaries a better chance to influence the snake. After about 200 iterations the snake reaches a position hugging the boundary of the head. In a second example, a fixed-end snake with its end points at the bottom corners of the image was allowed to contract in from the sides and top of the image. The snake stabilises on the boundary between hair and background although this is a relatively low-contrast boundary in the
image. As the snake would face problems trying to contract across a patterned background, it might be better to derive the image forces from a moving edge detector.
In Kass et al, ibid., an example is shown of snakes being used to track the moving lips of a person. First, the snake is stabilised on the lips in the first frame of a moving sequence of images; in the second frame it is initialised in the position corresponding to its stable position in the previous frame and allowed to achieve equilibrium again.
There is a clear problem with the technique in this form in that if the motion is too great between frames, the snake may lock on to different features in the next frame and thus lose track. Kass suggests a remedy using the principle of 'scale-space continuation': the snake is allowed to stabilise first on an image which has been smoothed using a Gaussian filter with a large space constant; this has the effect of pulling the snake in from a large distance. After equilibrium has occurred, the snake is presented with a new set of image forces derived by using a Gaussian with a slightly smaller space constant, and the process is continued until equilibrium has occurred in the image at the highest level of resolution possible.
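A one-dimensional sketch of this coarse-to-fine schedule (our own illustration; the kernel construction, the 3σ truncation and the particular σ schedule are conventional choices rather than values from Kass et al):

```python
import numpy as np

def gaussian_blur(signal, sigma):
    """Smooth a 1-D signal with a truncated, normalised Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')

def scale_space_forces(signal, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Yield progressively less-smoothed versions of the image forces;
    the snake is relaxed to equilibrium on each level in turn, so the
    large space constant first pulls it in from a long distance."""
    for sigma in sigmas:
        yield gaussian_blur(signal, sigma)
```

Each level's equilibrium becomes the starting position for the next, sharper level - the continuation that keeps the snake from locking on to the wrong feature.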
This is clearly a computationally expensive process; a radically simpler technique has been developed and found to work well, and this will now be described. After the snake has reached equilibrium in the first frame of a sequence, block motion estimation is carried out at the positions of the snake nodes (the snake is conventionally implemented by approximating it with a set of discrete nodes - 24 in our implementation). The motion estimation is performed from one frame into the next frame, which is the opposite sense to that conventionally performed during motion compensation for video coding. If the best match positions for the blocks are plotted in the next frame then, due to the 'aperture problem', a good match can often be found at a range of points along a boundary segment which is longer than the side of the block being matched. The effect is to produce a non-uniform spacing of the points. The snake is then initialised in the next frame with its nodes at the positions of the best match block positions and allowed to run for a few iterations, typically ten. The result is that the nodes are now uniformly distributed along the boundary; the snake has successfully tracked the boundary, the block motion estimation having acted as a sort of 'power-assist' which will ensure tracking is maintained as long as the overall motion is not greater than the maximum displacement of the block search. As the motion estimation is performed at only a small set of points, the computation time is not increased significantly.
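A minimal version of the per-node block search (our own sketch; the sum-of-absolute-differences criterion, block size and search range are typical choices, not values taken from the text):

```python
import numpy as np

def best_match(prev, nxt, node, block=8, search=4):
    """Find where the block centred on `node` in frame `prev` has moved
    to in frame `nxt`, by exhaustive sum-of-absolute-differences search.

    Note the search runs forwards, from the current frame into the next
    frame - the opposite sense to motion compensation in video coding.
    """
    y, x = node
    b = block // 2
    ref = prev[y - b:y + b, x - b:x + b].astype(float)
    best, best_err = node, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = nxt[y + dy - b:y + dy + b, x + dx - b:x + dx + b]
            if cand.shape != ref.shape:
                continue  # candidate window fell off the image
            err = np.abs(cand.astype(float) - ref).sum()
            if err < best_err:
                best_err, best = err, (y + dy, x + dx)
    return best
```

Running this at each of the snake's nodes gives the (possibly non-uniformly spaced) starting positions; a few snake iterations then redistribute the nodes along the boundary.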
Both fixed-end and closed snakes have been shown to perform object boundary location even in situations where there may be low contrast between the object and its background.
A composite technique using both block motion estimation and snake fitting has been shown to perform boundary tracking in a sequence of moving images. The technique is simpler to implement than an equivalent coarse-to-fine resolution technique. The methods described in the paper have so far been tested on images where the object boundaries do not have very great or discontinuous curvature at any point; if these conditions are not met, the snake would fail to conform itself correctly to the boundary contour. One solution, currently being pursued, is to effectively split the boundaries into a number of shorter segments and fit these segments with several fixed-end snakes.
According to a second aspect of the present invention a method of verifying the identity of the user of a data carrier comprises: generating a digital facial image of the user; receiving the data carrier and reading therefrom identification data; performing the method of the first aspect of the present invention; comparing each feature vector, or data derived therefrom, with the identification data; and generating a verification signal in dependence upon the comparison.
According to a yet further aspect of the present invention apparatus for verifying the identity of the user of a data carrier comprises: means for generating a digital facial image of the user; means for receiving the data carrier and reading therefrom identification data; means for performing the method of the first aspect of the present invention; and means for comparing each feature vector, or data derived therefrom, with the identification data and generating a verification signal in dependence upon the comparison.
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a flow diagram of the calculation of a feature vector;
Figure 2 illustrates apparatus for credit verification using the method of the present invention; and
Figure 3 illustrates the method of the present invention.
Referring to Figure 1, an overview of an embodiment incorporating both aspects of the invention will be described.
An image of the human face is captured in some manner - for example by using a video camera or by a photograph - and digitised to provide an array of pixel values. A head detection algorithm is employed to locate the position within the array of the face or head. This head location stage may comprise one of several known methods but is preferably a method using the above described "snake" techniques. Pixel data lying outside the boundaries thus determined are ignored.
The second step is carried out on the pixel data lying inside the boundaries to locate the features to be used for recognition - typically the eyes and mouth. Again, several location techniques for finding the position of the eyes and mouth are known from the prior art, but preferably a two stage process of coarse location followed by fine location is employed. The coarse location technique might, for example, be that described in US 4841575.
The fine location technique preferably uses the deformable template technique described by Yuille et al, "Feature Extraction from Faces using Deformable Templates", Harvard Robotics Lab Technical Report Number 88/2, published in Computer Vision and Pattern Recognition, June 1989, IEEE.
In this technique, which has been described above, a line model topologically equivalent to the feature is positioned (by the coarse location technique) near the feature and is iteratively moved and deformed until the best fit is obtained.
The feature is identified as being at this position.
Next, the shape of the feature is changed until it assumes a standard, topologically equivalent, shape. If the fine location technique utilised deformable templates as disclosed above, then the deformation of the feature can be achieved to some extent by reversing the deformation of the template to match the feature to the initial, standard, shape of the template.
Since the exact position of the feature is now known, and its exact shape is specified, this information can be employed as identifiers of the image supplementary to the recognition process using the feature vector of the feature. All image data outside the region identified as being the feature are ignored, and the image data identified as being the feature are resolved into the orthogonal eigen-picture components corresponding to that feature. The component vector is then compared with a component vector corresponding to a given person to be identified and, in the event of substantial similarity, recognition may be indicated.
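The resolution into eigen-picture components is simply a projection onto an orthonormal basis. A sketch (our own; the subtraction of a mean picture is conventional in eigen-picture work and is an assumption here, as are the names):

```python
import numpy as np

def feature_vector(feature_img, eigen_pics, mean_pic):
    """Project a shape-normalised feature image onto N eigen-pictures.

    feature_img : flattened feature sub-image, shape (P,)
    eigen_pics  : orthonormal eigen-pictures as rows, shape (N, P)
    mean_pic    : mean of the training images, shape (P,)
    Returns the N component values - the 'feature vector' of the text.
    """
    return eigen_pics @ (np.asarray(feature_img, float) - mean_pic)
```

Because the eigen-pictures are orthonormal, each component is independent of the others, and truncating to the first N components gives the best N-term approximation of the feature image in the least-squares sense.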
Referring to Figure 2, an embodiment of the invention suitable for credit card verification will now be described.
A video camera 1 receives an image of a prospective user of a credit card terminal. Upon entry of the card to a card entry device 2, the analogue output of the video camera 1 is digitised by an AD converter 3, and sequentially clocked into a framestore 4. A video processor 5 (for example, a suitably programmed digital signal processing chip such as the AT&T DSP 20) is connected to access the framestore 4 and processes the digital image therein to form an edge-enhanced image. One method of doing this is simply to subtract each sample from its predecessor to form a difference picture, but a better method involves the use of a Laplacian type of operator, the output of which is modified by a sigmoidal function which suppresses small levels of activity due to noise as well as very strong edges, whilst leaving intermediate values barely changed. By this means, a smoother edge image is generated, and weak edge contours such as those around the line of the chin are enhanced. This edge picture is stored in an edge picture frame buffer 6. The processor then executes a closed loop snake method, using finite differences, to derive a boundary which encompasses the head. Once the snake algorithm has converged, the position of the boundaries of the head in the edge image, and hence in the corresponding image in the frame store 4, is known.
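A rough sketch of this edge pre-processing (our own; the five-point Laplacian and the particular smooth weighting function stand in for the unspecified operator and sigmoidal function, and `low` and `high` are hypothetical tuning parameters):

```python
import numpy as np

def edge_picture(img, low=0.05, high=2.0):
    """Laplacian-type operator followed by rectification and a smooth
    non-linear weighting that damps both tiny responses (noise) and very
    strong responses, leaving intermediate edges nearly unchanged - so
    weak contours such as the chin line can still influence the snake."""
    # Five-point Laplacian (wrap-around borders, for brevity).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    r = np.abs(lap)  # rectification
    # Rise smoothly above the noise floor `low`, roll off above `high`.
    return r * (1.0 - np.exp(-(r / low)**2)) * np.exp(-(r / high)**2)
```

The test below checks the key property: the raw response at a strong impulse edge is larger than at its neighbours, but after weighting it falls below the moderate neighbouring responses.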
The edge image in the frame buffer 6 is then processed to derive a coarse approximation to the location of the features of interest - typically the eyes and the mouth. The method of Nagao is one suitable technique (Nagao M, "Picture Recognition and Data Structure", Graphic Languages - Ed. Rosenfeld), as described in our earlier application EP0225729. The estimates of position thus derived are used as starting positions for the dynamic template process which establishes the exact feature position.
Accordingly, processor 5 employs the method described in Yuille et al [Yuille A, Cohen D, Hallinan P, (1988), "Facial Feature Extraction by Deformable Templates", Harvard Robotics Lab. Technical Report no. 88-2] to derive position data for each feature, which consists of a size (or resolution) and a series of point coordinates given as fractions of the total size of the template. Certain of these points are designated as key points, which are always internal to the template, the other points always being edge points. These key point position data are stored, and may also be used as recognition indicia. This is indicated in Figure 3.
Next, the geometrical transformation of the feature to the standard shape is performed by the processor 5. This transformation takes the form of a mapping between triangular facets of the regions and the templates. The facets consist of local collections of three points and are defined in the template definition files. The mapping is formed by considering the x, y values of template vertices with each of the x', y' values of the corresponding region vertices - this yields two plane equations from which each x', y' point can be calculated given any x, y within the template facet, and thus the image data can be mapped from the region sub-image.
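The two plane equations for a facet can be solved directly from the three vertex correspondences. A sketch (our own naming; the text's template definition files are replaced here by explicit vertex arrays):

```python
import numpy as np

def facet_mapping(template_tri, region_tri):
    """Return the affine map (x, y) -> (x', y') fixed by mapping the
    three template-facet vertices onto the region-facet vertices.

    Each of x' and y' is an affine (plane) function of (x, y); the three
    correspondences give three equations for its three coefficients.
    """
    src = np.asarray(template_tri, float)   # (3, 2) template vertices
    dst = np.asarray(region_tri, float)     # (3, 2) region vertices
    A = np.column_stack([src, np.ones(3)])  # rows [x, y, 1]
    coeff = np.linalg.solve(A, dst)         # (3, 2) plane coefficients
    return lambda x, y: tuple(np.array([x, y, 1.0]) @ coeff)
```

Evaluating the returned map at each pel of the template facet gives the location in the region sub-image to sample from, which is exactly the facet-by-facet rendering described next.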
The entire template sub-image is obtained by rendering (or scan-converting) each constituent facet pel by pel, taking the pel's value from the corresponding mapped location in the equivalent region's sub-image.
The processor 5 is arranged to perform the mapping of the extracted region sub-images to their corresponding generic template size and shape. The keypoints on the regions form a triangular mesh with a corresponding mesh defined for the generic template shape; mappings are then formed from each triangle in the generic mesh to its equivalent in the region mesh. The distorted sub-images are then created and stored in the template data structures for later display.
The central procedure in this module is the 'template stretching' procedure. This routine creates each distorted template sub-image facet by facet (each facet is defined by three connected template points). A mapping is obtained from each template facet to the corresponding region facet, and then the template facet is filled in pel by pel with image data mapped from the region sub-image. After all facets have been processed in this way the distorted template sub-image will have been completely filled in with image data.
The standardised feature image thus produced is then stored in a feature image buffer 7. An eigen-picture buffer 8 contains a plurality (for example 50) of eigen-pictures of increasing sequency which have previously been derived in known manner from a representative population (preferably using an equivalent geometric normalisation technique to that disclosed above). A transform processor 9 (which may in practice be realised as processor 5 acting under suitable stored instructions) derives the co-ordinates or components of the feature image with regard to each eigen-picture, to give a vector of 50 numbers, using the method described above. The card entry device 2 reads from the inserted credit card the 50 components which characterise the
correct user of that card, which are input to a comparator 10 (which may again in practice be realised as part of a single processing device) which measures the distance in pattern space between the two vectors. The preferred metric is the Euclidean distance, although other distance metrics (e.g. a "city block" metric) could equally be used. If this distance is less than a predetermined threshold, correct recognition is indicated at an output 11; otherwise, recognition failure is signalled.
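The comparator stage reduces to a distance test in the 50-dimensional pattern space. A sketch (our own; the threshold value is application-dependent and hypothetical):

```python
import numpy as np

def verify(live_vec, card_vec, threshold):
    """Accept if and only if the Euclidean distance between the live
    feature vector and the vector read from the card is below threshold."""
    diff = np.asarray(live_vec, float) - np.asarray(card_vec, float)
    return bool(np.linalg.norm(diff) < threshold)
```

A city-block variant would simply replace the norm with `np.abs(diff).sum()`; the threshold then needs retuning, since the two metrics scale differently with dimension.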
Other data may also be incorporated into the recognition process; for example, data derived during template deformation, or head measurements (e.g. the ratio of head height to head width derived during the head location stage), or the feature position data as mentioned above. Recognition results may be combined in the manner indicated in our earlier application GB9005190.5.
Generally, some preprocessing of the image is provided (indicated schematically as 12 in Figure 2); for example, noise filtering (spatial or temporal) and brightness or 20 contrast normalisation.
Variations in lighting can produce a spatially variant effect on the image brightness due to shadowing by the brows etc. It may be desirable to further pre-process the images to remove most of this variation by using a second derivative operator or morphological filter in place of the raw image data currently used. A blurring filter would probably also be required.
It might also be desirable to reduce the effects of variations in geometric normalisation on the representation vectors. This could be accomplished by using low-pass filtered images throughout, which should give more stable representations for recognition purposes.
Claims (14)
THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method of processing an image comprising the steps of:
locating within the image the position of at least one predetermined feature;
extracting image data from said image representing each said feature; and calculating for each feature a feature vector representing the position of the image data of the feature in an N-dimensional space, said space being defined by a plurality of reference vectors each of which is an eigenvector of a training set of images of like features;
characterised in that the method further comprises the step of:
modifying the image data of each feature to normalise the shape of each feature thereby to reduce its deviation from a predetermined standard shape of said feature, which step is carried out before calculating the corresponding feature vector.
2. A method according to Claim 1, wherein the step of modifying the image data of each feature involves the use of a deformable template technique.
3. A method according to Claim 1, wherein the step of locating within the image the position of at least one predetermined feature employs a first technique to provide a coarse estimation of position and a second, different, technique to improve upon the coarse estimation.
4. A method according to Claim 3 wherein the second technique involves the use of a deformable template technique.
5. A method according to Claims 1, 2, 3 or 4 in which the training set of images of like features are modified to normalise the shape of each of the training set of images thereby to reduce their deviation from a predetermined standard shape of said feature, which step is carried out before calculating the eigenvectors of the training set of images.
6. A method according to Claim 5 comprising locating a portion of the image by determining parameters of a closed curve arranged to lie adjacent a plurality of edge features of the image, said curve being constrained to exceed a minimum curvature and to have a minimum length compatible therewith.
7. A method as claimed in Claim 6 in which the boundary of the curve is initially calculated proximate the edges of the image, and subsequently iteratively reduced.
8. A method according to Claim 6 in which the portion of the image is a face or a head of a person.
9. A method according to Claim 8 further comprising determining the position of each feature within the image.
10. A method of verifying the identity of the user of a data carrier comprising:
generating a digital facial image of the user;
receiving the data carrier and reading therefrom identification data;
performing the method of any one of Claims 1 to 4, or 6 to 9;
comparing each feature vector, or data derived therefrom, with the identification data; and generating a verification signal in dependence upon the comparison.
11. A method of verifying the identity of the user of a data carrier comprising:
generating a digital facial image of the user;
receiving the data carrier and reading therefrom identification data;
performing the method of Claim 5;
comparing each feature vector, or data derived therefrom, with the identification data; and generating a verification signal in dependence upon the comparison.
12. Apparatus for verifying the identity of the user of a data carrier comprising:
means for generating a digital facial image of the user;
means for receiving the data carrier and reading therefrom identification data;
means for performing the method of any one of Claims 1 to 4, or 6 to 9; and means for comparing each feature vector, or data derived therefrom, with the identification data and generating a verification signal in dependence upon the comparison.
13. Apparatus for verifying the identity of the user of a data carrier comprising:
means for generating a digital facial image of the user;
means for receiving the data carrier and reading therefrom identification data;
means for performing the method of Claim 5;
and means for comparing each feature vector, or data derived therefrom, with the identification data and generating a verification signal in dependence upon the comparison.
14. Apparatus as claimed in Claim 12 or 13, in which the means for generating a digital facial image of the user comprises a video camera, the output of which is connected to an analogue-to-digital (A/D) converter.
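The verification flow of Claims 10 to 13 (generate a facial image, read identification data from the data carrier, extract feature vectors, compare, emit a verification signal) can be sketched as follows. This is a minimal illustrative sketch only: the function names, the Euclidean distance metric, and the threshold value are assumptions for illustration, not details taken from the patent.

```python
import math

def extract_feature_vectors(facial_image):
    """Stand-in for the claimed feature-location method (Claims 1 to 9):
    in a real system this would locate each facial feature in the digital
    image and return one feature vector per feature. Here the input is
    assumed to already be a list of feature vectors, for illustration."""
    return facial_image

def verify_user(facial_image, identification_data, threshold=0.5):
    """Compare each feature vector with the identification data read from
    the data carrier, and generate a verification signal (True/False) in
    dependence upon the comparison."""
    live_vectors = extract_feature_vectors(facial_image)
    for live, stored in zip(live_vectors, identification_data):
        # Illustrative comparison: Euclidean distance per feature vector.
        distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(live, stored)))
        if distance > threshold:
            return False  # verification signal: identity not confirmed
    return True  # verification signal: identity confirmed
```

In use, the identification data would be read from the card and compared against vectors extracted from a freshly captured image; the boolean return stands in for the claimed verification signal.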
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9015694.4 | 1990-07-17 | ||
GB909015694A GB9015694D0 (en) | 1990-07-17 | 1990-07-17 | Face processing |
GB909019827A GB9019827D0 (en) | 1990-09-11 | 1990-09-11 | Object location and tracking using a 'power-assisted' snake |
GB9019827.6 | 1990-09-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2087523C true CA2087523C (en) | 1997-04-15 |
Family
ID=26297337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002087523A Expired - Lifetime CA2087523C (en) | 1990-07-17 | 1991-07-17 | Method of processing an image |
Country Status (6)
Country | Link |
---|---|
US (1) | US5719951A (en) |
EP (1) | EP0539439B1 (en) |
JP (1) | JP3040466B2 (en) |
CA (1) | CA2087523C (en) |
DE (1) | DE69131350T2 (en) |
WO (1) | WO1992002000A1 (en) |
Families Citing this family (161)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5450504A (en) * | 1992-05-19 | 1995-09-12 | Calia; James | Method for finding a most likely matching of a target facial image in a data base of facial images |
US5841470A (en) * | 1992-06-29 | 1998-11-24 | British Telecommunications Public Limited Company | Coding and decoding video signals |
JP3252381B2 (en) * | 1992-09-08 | 2002-02-04 | ソニー株式会社 | Pattern recognition device |
NO933205D0 (en) * | 1993-09-08 | 1993-09-08 | Harald Martens | Data representation system |
US5983251A (en) * | 1993-09-08 | 1999-11-09 | Idt, Inc. | Method and apparatus for data analysis |
US5574573A (en) * | 1993-10-29 | 1996-11-12 | Eastman Kodak Company | Compression method for a standardized image library |
US5512939A (en) * | 1994-04-06 | 1996-04-30 | At&T Corp. | Low bit rate audio-visual communication system having integrated perceptual speech and video coding |
CA2145914A1 (en) * | 1994-05-27 | 1995-11-28 | Alexandros Eleftheriadis | Model-assisted coding of video sequences at low bit rates |
ZA959492B (en) * | 1994-12-21 | 1996-07-10 | Eastman Kodak Co | Method and apparatus for the formation of standardised image templates |
ZA959491B (en) * | 1994-12-21 | 1996-06-29 | Eastman Kodak Co | Method for compressing and decompressing standardized portrait images |
US5727089A (en) * | 1995-01-05 | 1998-03-10 | Eastman Kodak Company | Method and apparatus for multiple quality transaction card images |
US6172706B1 (en) * | 1995-04-19 | 2001-01-09 | Canon Kabushiki Kaisha | Video camera with automatic zoom adjustment based on distance between user's eyes |
JP3735893B2 (en) * | 1995-06-22 | 2006-01-18 | セイコーエプソン株式会社 | Face image processing method and face image processing apparatus |
KR0160711B1 (en) * | 1995-07-20 | 1999-05-01 | 김광호 | Method and apparatus for reproducing still image of camcorder |
US5887081A (en) * | 1995-12-07 | 1999-03-23 | Ncr Corporation | Method for fast image identification and categorization of multimedia data |
DE19610066C1 (en) * | 1996-03-14 | 1997-09-18 | Siemens Nixdorf Advanced Techn | Process for the collection of face-related personal data and their use for the identification or verification of persons |
JP3279913B2 (en) * | 1996-03-18 | 2002-04-30 | 株式会社東芝 | Person authentication device, feature point extraction device, and feature point extraction method |
US6188776B1 (en) * | 1996-05-21 | 2001-02-13 | Interval Research Corporation | Principle component analysis of images for the automatic location of control points |
US5920644A (en) * | 1996-06-06 | 1999-07-06 | Fujitsu Limited | Apparatus and method of recognizing pattern through feature selection by projecting feature vector on partial eigenspace |
JP3529954B2 (en) * | 1996-09-05 | 2004-05-24 | 株式会社資生堂 | Face classification method and face map |
JP2918499B2 (en) * | 1996-09-17 | 1999-07-12 | 株式会社エイ・ティ・アール人間情報通信研究所 | Face image information conversion method and face image information conversion device |
US6044168A (en) * | 1996-11-25 | 2000-03-28 | Texas Instruments Incorporated | Model based faced coding and decoding using feature detection and eigenface coding |
US6185337B1 (en) * | 1996-12-17 | 2001-02-06 | Honda Giken Kogyo Kabushiki Kaisha | System and method for image recognition |
JP3912834B2 (en) | 1997-03-06 | 2007-05-09 | 有限会社開発顧問室 | Face image correction method, makeup simulation method, makeup method, makeup support apparatus, and foundation transfer film |
US6330354B1 (en) * | 1997-05-01 | 2001-12-11 | International Business Machines Corporation | Method of analyzing visual inspection image data to find defects on a device |
US6381345B1 (en) * | 1997-06-03 | 2002-04-30 | At&T Corp. | Method and apparatus for detecting eye location in an image |
US6016148A (en) * | 1997-06-06 | 2000-01-18 | Digital Equipment Corporation | Automated mapping of facial images to animation wireframes topologies |
US6195445B1 (en) * | 1997-06-30 | 2001-02-27 | Siemens Corporate Research, Inc. | Motion compensation of an image sequence using optimal polyline tracking |
US6151403A (en) * | 1997-08-29 | 2000-11-21 | Eastman Kodak Company | Method for automatic detection of human eyes in digital images |
US6252976B1 (en) * | 1997-08-29 | 2001-06-26 | Eastman Kodak Company | Computer program product for redeye detection |
US7630006B2 (en) | 1997-10-09 | 2009-12-08 | Fotonation Ireland Limited | Detecting red eye filter and apparatus using meta-data |
US7042505B1 (en) | 1997-10-09 | 2006-05-09 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus |
US7738015B2 (en) * | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
US6035055A (en) * | 1997-11-03 | 2000-03-07 | Hewlett-Packard Company | Digital image management system in a distributed data access network system |
US6539103B1 (en) * | 1997-11-12 | 2003-03-25 | The University Of Utah | Method and apparatus for image reconstruction using a knowledge set |
US6148092A (en) * | 1998-01-08 | 2000-11-14 | Sharp Laboratories Of America, Inc | System for detecting skin-tone regions within an image |
US9292111B2 (en) | 1998-01-26 | 2016-03-22 | Apple Inc. | Gesturing with a multipoint sensing device |
US8479122B2 (en) | 2004-07-30 | 2013-07-02 | Apple Inc. | Gestures for touch sensitive input devices |
US7614008B2 (en) | 2004-07-30 | 2009-11-03 | Apple Inc. | Operation of a computer with touch screen interface |
KR100595922B1 (en) | 1998-01-26 | 2006-07-05 | 웨인 웨스터만 | Method and apparatus for integrating manual input |
JP3271750B2 (en) * | 1998-03-05 | 2002-04-08 | 沖電気工業株式会社 | Iris identification code extraction method and device, iris recognition method and device, data encryption device |
DE19810792A1 (en) | 1998-03-12 | 1999-09-16 | Zentrum Fuer Neuroinformatik G | Personal identity verification method for access control e.g. for automatic banking machine |
JP3657769B2 (en) * | 1998-03-19 | 2005-06-08 | 富士写真フイルム株式会社 | Image processing method and image processing apparatus |
GB2336735A (en) * | 1998-04-24 | 1999-10-27 | Daewoo Electronics Co Ltd | Face image mapping |
US6215471B1 (en) | 1998-04-28 | 2001-04-10 | Deluca Michael Joseph | Vision pointer method and apparatus |
US6064354A (en) | 1998-07-01 | 2000-05-16 | Deluca; Michael Joseph | Stereoscopic user interface method and apparatus |
US6292575B1 (en) | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
JP3725784B2 (en) * | 1998-08-07 | 2005-12-14 | コリア インスティテュート オブ サイエンス アンド テクノロジー | Apparatus and method for detecting moving object in color frame image sequence |
US6714661B2 (en) * | 1998-11-06 | 2004-03-30 | Nevengineering, Inc. | Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image |
DE19901881A1 (en) * | 1999-01-19 | 2000-07-27 | Dcs Dialog Communication Syste | Falsification protection method for biometric identification process for people using multiple data samples, |
US6917692B1 (en) * | 1999-05-25 | 2005-07-12 | Thomson Licensing S.A. | Kalman tracking of color objects |
DE69932643T2 (en) | 1999-12-07 | 2007-04-05 | Sun Microsystems, Inc., Santa Clara | IDENTIFICATION DEVICE WITH SECURED PHOTO, AND METHOD AND METHOD FOR AUTHENTICATING THIS IDENTIFICATION DEVICE |
JP2001266158A (en) * | 2000-01-11 | 2001-09-28 | Canon Inc | Image processor, image processing system, image processing method and storage medium |
US6940545B1 (en) | 2000-02-28 | 2005-09-06 | Eastman Kodak Company | Face detecting camera and method |
US6792144B1 (en) * | 2000-03-03 | 2004-09-14 | Koninklijke Philips Electronics N.V. | System and method for locating an object in an image using models |
US6597736B1 (en) * | 2000-03-29 | 2003-07-22 | Cisco Technology, Inc. | Throughput enhanced video communication |
DE10017551C2 (en) * | 2000-04-08 | 2002-10-24 | Carl Zeiss Vision Gmbh | Process for cyclic, interactive image analysis and computer system and computer program for executing the process |
US7245766B2 (en) * | 2000-05-04 | 2007-07-17 | International Business Machines Corporation | Method and apparatus for determining a region in an image based on a user input |
AUPQ896000A0 (en) * | 2000-07-24 | 2000-08-17 | Seeing Machines Pty Ltd | Facial image processing system |
US7079158B2 (en) * | 2000-08-31 | 2006-07-18 | Beautyriot.Com, Inc. | Virtual makeover system and method |
US20020081003A1 (en) * | 2000-12-27 | 2002-06-27 | Sobol Robert E. | System and method for automatically enhancing graphical images |
US6961465B2 (en) * | 2000-12-28 | 2005-11-01 | Canon Kabushiki Kaisha | System and method for efficient determination of recognition initial conditions |
EP1229486A1 (en) * | 2001-01-31 | 2002-08-07 | GRETAG IMAGING Trading AG | Automatic image pattern detection |
EP1250005A1 (en) * | 2001-04-12 | 2002-10-16 | BRITISH TELECOMMUNICATIONS public limited company | Video communication with feedback of the caller's position relative to the camera |
US7027620B2 (en) * | 2001-06-07 | 2006-04-11 | Sony Corporation | Method of recognizing partially occluded and/or imprecisely localized faces |
WO2003010728A1 (en) * | 2001-07-24 | 2003-02-06 | Koninklijke Kpn N.V. | Method and system and data source for processing of image data |
AUPR676201A0 (en) * | 2001-08-01 | 2001-08-23 | Canon Kabushiki Kaisha | Video feature tracking with loss-of-track detection |
US7054468B2 (en) * | 2001-12-03 | 2006-05-30 | Honda Motor Co., Ltd. | Face recognition using kernel fisherfaces |
US7123783B2 (en) * | 2002-01-18 | 2006-10-17 | Arizona State University | Face classification using curvature-based multi-scale morphology |
US20070261077A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Using audio/visual environment to select ads on game platform |
US20070061413A1 (en) * | 2005-09-15 | 2007-03-15 | Larsen Eric J | System and method for obtaining user information from voices |
US20070260517A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Profile detection |
US20040064709A1 (en) * | 2002-09-30 | 2004-04-01 | Heath James G. | Security apparatus and method |
AU2002362145A1 (en) * | 2002-12-11 | 2004-06-30 | Nielsen Media Research, Inc. | Detecting a composition of an audience |
US7203338B2 (en) * | 2002-12-11 | 2007-04-10 | Nielsen Media Research, Inc. | Methods and apparatus to count people appearing in an image |
US7254275B2 (en) * | 2002-12-17 | 2007-08-07 | Symbol Technologies, Inc. | Method and system for image compression using image symmetry |
JP4406547B2 (en) * | 2003-03-03 | 2010-01-27 | 富士フイルム株式会社 | ID card creation device, ID card, face authentication terminal device, face authentication device and system |
US8170294B2 (en) * | 2006-11-10 | 2012-05-01 | DigitalOptics Corporation Europe Limited | Method of detecting redeye in a digital image |
US7151969B2 (en) * | 2003-06-26 | 2006-12-19 | International Business Machines Corporation | Administering devices in dependence upon user metric vectors with optimizing metric action lists |
US7689009B2 (en) * | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
US7970182B2 (en) | 2005-11-18 | 2011-06-28 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7920723B2 (en) * | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US8254674B2 (en) * | 2004-10-28 | 2012-08-28 | DigitalOptics Corporation Europe Limited | Analyzing partial face regions for red-eye detection in acquired digital images |
US7792970B2 (en) * | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US8036458B2 (en) * | 2007-11-08 | 2011-10-11 | DigitalOptics Corporation Europe Limited | Detecting redeye defects in digital images |
US7616233B2 (en) * | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
KR20030066512A (en) * | 2003-07-04 | 2003-08-09 | 김재민 | Iris Recognition System Robust to noises |
US20050140801A1 (en) * | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
US9412007B2 (en) * | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US8520093B2 (en) * | 2003-08-05 | 2013-08-27 | DigitalOptics Corporation Europe Limited | Face tracker and partial face tracker for red-eye filter method and apparatus |
US7317816B2 (en) * | 2003-08-19 | 2008-01-08 | Intel Corporation | Enabling content-based search of objects in an image database with reduced matching |
US7512255B2 (en) * | 2003-08-22 | 2009-03-31 | Board Of Regents, University Of Houston | Multi-modal face recognition |
US6984039B2 (en) * | 2003-12-01 | 2006-01-10 | Eastman Kodak Company | Laser projector having silhouette blanking for objects in the output light path |
US20050129334A1 (en) * | 2003-12-12 | 2005-06-16 | Wilder Daniel V. | Event photo retrieval system and method |
GB2410876A (en) * | 2004-02-04 | 2005-08-10 | Univ Sheffield Hallam | Matching surface features to render 3d models |
US20110102643A1 (en) * | 2004-02-04 | 2011-05-05 | Tessera Technologies Ireland Limited | Partial Face Detector Red-Eye Filter Method and Apparatus |
US7428322B2 (en) * | 2004-04-20 | 2008-09-23 | Bio-Rad Laboratories, Inc. | Imaging method and apparatus |
US7388580B2 (en) * | 2004-05-07 | 2008-06-17 | Valve Corporation | Generating eyes for a character in a virtual environment |
JP2005339288A (en) * | 2004-05-27 | 2005-12-08 | Toshiba Corp | Image processor and its method |
TW200540732A (en) * | 2004-06-04 | 2005-12-16 | Bextech Inc | System and method for automatically generating animation |
US7660482B2 (en) * | 2004-06-23 | 2010-02-09 | Seiko Epson Corporation | Method and apparatus for converting a photo to a caricature image |
US7454039B2 (en) * | 2004-07-12 | 2008-11-18 | The Board Of Trustees Of The University Of Illinois | Method of performing shape localization |
US8381135B2 (en) | 2004-07-30 | 2013-02-19 | Apple Inc. | Proximity detector in handheld device |
US7692683B2 (en) * | 2004-10-15 | 2010-04-06 | Lifesize Communications, Inc. | Video conferencing system transcoder |
US20060248210A1 (en) * | 2005-05-02 | 2006-11-02 | Lifesize Communications, Inc. | Controlling video display mode in a video conferencing system |
FR2890215B1 (en) * | 2005-08-31 | 2007-11-09 | St Microelectronics Sa | DIGITAL PROCESSING OF AN IMAGE OF IRIS |
US8645985B2 (en) | 2005-09-15 | 2014-02-04 | Sony Computer Entertainment Inc. | System and method for detecting user attention |
US8616973B2 (en) * | 2005-09-15 | 2013-12-31 | Sony Computer Entertainment Inc. | System and method for control by audible device |
US7599527B2 (en) * | 2005-09-28 | 2009-10-06 | Facedouble, Inc. | Digital image search system and method |
US7599577B2 (en) * | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
EP1983898B1 (en) * | 2006-02-03 | 2017-01-18 | Koninklijke Philips N.V. | Ultrasonic method and apparatus for measuring or detecting flow behavior of a non-sinusoidal periodicity |
EP1987475A4 (en) | 2006-02-14 | 2009-04-22 | Fotonation Vision Ltd | Automatic detection and correction of non-red eye flash defects |
CA2654960A1 (en) | 2006-04-10 | 2008-12-24 | Avaworks Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US20070243930A1 (en) * | 2006-04-12 | 2007-10-18 | Gary Zalewski | System and method for using user's audio environment to select advertising |
US20070255630A1 (en) * | 2006-04-17 | 2007-11-01 | Gary Zalewski | System and method for using user's visual environment to select advertising |
US20070244751A1 (en) * | 2006-04-17 | 2007-10-18 | Gary Zalewski | Using visual environment to select ads on game platform |
EP2033142B1 (en) * | 2006-06-12 | 2011-01-26 | Tessera Technologies Ireland Limited | Advances in extending the aam techniques from grayscale to color images |
US20070297651A1 (en) * | 2006-06-23 | 2007-12-27 | Schubert Peter J | Coutour-based object recognition method for a monocular vision system |
EP2104930A2 (en) | 2006-12-12 | 2009-09-30 | Evans & Sutherland Computer Corporation | System and method for aligning rgb light in a single modulator projector |
US8055067B2 (en) * | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
US7792379B2 (en) * | 2007-02-06 | 2010-09-07 | Accenture Global Services Gmbh | Transforming a submitted image of a person based on a condition of the person |
WO2008109708A1 (en) * | 2007-03-05 | 2008-09-12 | Fotonation Vision Limited | Red eye false positive filtering using face location and orientation |
JP4988408B2 (en) | 2007-04-09 | 2012-08-01 | 株式会社デンソー | Image recognition device |
US8319814B2 (en) * | 2007-06-22 | 2012-11-27 | Lifesize Communications, Inc. | Video conferencing system which allows endpoints to perform continuous presence layout selection |
US8139100B2 (en) * | 2007-07-13 | 2012-03-20 | Lifesize Communications, Inc. | Virtual multiway scaler compensation |
US8503818B2 (en) | 2007-09-25 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Eye defect detection in international standards organization images |
US8212864B2 (en) * | 2008-01-30 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for using image acquisition data to detect and correct image defects |
US20090196524A1 (en) * | 2008-02-05 | 2009-08-06 | Dts Digital Images, Inc. | System and method for sharpening of digital images |
EP2105845B1 (en) | 2008-03-28 | 2018-05-02 | Neoperl GmbH | Identification method |
US8358317B2 (en) | 2008-05-23 | 2013-01-22 | Evans & Sutherland Computer Corporation | System and method for displaying a planar image on a curved surface |
US8702248B1 (en) | 2008-06-11 | 2014-04-22 | Evans & Sutherland Computer Corporation | Projection method for reducing interpixel gaps on a viewing surface |
US8411963B2 (en) * | 2008-08-08 | 2013-04-02 | The Nielsen Company (U.S.), Llc | Methods and apparatus to count persons in a monitored environment |
US8081254B2 (en) * | 2008-08-14 | 2011-12-20 | DigitalOptics Corporation Europe Limited | In-camera based method of detecting defect eye with high accuracy |
US8514265B2 (en) * | 2008-10-02 | 2013-08-20 | Lifesize Communications, Inc. | Systems and methods for selecting videoconferencing endpoints for display in a composite video image |
US20100110160A1 (en) * | 2008-10-30 | 2010-05-06 | Brandt Matthew K | Videoconferencing Community with Live Images |
US8077378B1 (en) | 2008-11-12 | 2011-12-13 | Evans & Sutherland Computer Corporation | Calibration system and method for light modulation device |
US20100142755A1 (en) * | 2008-11-26 | 2010-06-10 | Perfect Shape Cosmetics, Inc. | Method, System, and Computer Program Product for Providing Cosmetic Application Instructions Using Arc Lines |
US8643695B2 (en) * | 2009-03-04 | 2014-02-04 | Lifesize Communications, Inc. | Videoconferencing endpoint extension |
US8456510B2 (en) * | 2009-03-04 | 2013-06-04 | Lifesize Communications, Inc. | Virtual distributed multipoint control unit |
US8472681B2 (en) * | 2009-06-15 | 2013-06-25 | Honeywell International Inc. | Iris and ocular recognition system using trace transforms |
US8350891B2 (en) * | 2009-11-16 | 2013-01-08 | Lifesize Communications, Inc. | Determining a videoconference layout based on numbers of participants |
US8471889B1 (en) * | 2010-03-11 | 2013-06-25 | Sprint Communications Company L.P. | Adjusting an image for video conference display |
WO2012140782A1 (en) * | 2011-04-15 | 2012-10-18 | アイシン精機株式会社 | Eyelid-detection device, eyelid-detection method, and program |
US8620088B2 (en) | 2011-08-31 | 2013-12-31 | The Nielsen Company (Us), Llc | Methods and apparatus to count people in images |
JP2013048717A (en) | 2011-08-31 | 2013-03-14 | Sony Corp | Image processing device and method, recording medium, and program |
US8769435B1 (en) * | 2011-09-16 | 2014-07-01 | Google Inc. | Systems and methods for resizing an icon |
US9641826B1 (en) | 2011-10-06 | 2017-05-02 | Evans & Sutherland Computer Corporation | System and method for displaying distant 3-D stereo on a dome surface |
US9342735B2 (en) * | 2011-12-01 | 2016-05-17 | Finding Rover, Inc. | Facial recognition lost pet identifying system |
US9361672B2 (en) * | 2012-03-26 | 2016-06-07 | Google Technology Holdings LLC | Image blur detection |
TWI521469B (en) * | 2012-06-27 | 2016-02-11 | Reallusion Inc | Two-dimensional Roles Representation of Three-dimensional Action System and Method |
US9165535B2 (en) * | 2012-09-27 | 2015-10-20 | Google Inc. | System and method for determining a zoom factor of content displayed on a display device |
JP6532642B2 (en) | 2013-07-30 | 2019-06-19 | 富士通株式会社 | Biometric information authentication apparatus and biometric information processing method |
JP6077425B2 (en) * | 2013-09-25 | 2017-02-08 | Kddi株式会社 | Video management apparatus and program |
CN103632165B (en) * | 2013-11-28 | 2017-07-04 | 小米科技有限责任公司 | A kind of method of image procossing, device and terminal device |
KR20230150397A (en) | 2015-08-21 | 2023-10-30 | 매직 립, 인코포레이티드 | Eyelid shape estimation using eye pose measurement |
EP3337383A4 (en) * | 2015-08-21 | 2019-04-03 | Magic Leap, Inc. | Eyelid shape estimation |
EP3761232A1 (en) | 2015-10-16 | 2021-01-06 | Magic Leap, Inc. | Eye pose identification using eye features |
US11711638B2 (en) | 2020-06-29 | 2023-07-25 | The Nielsen Company (Us), Llc | Audience monitoring systems and related methods |
US11860704B2 (en) | 2021-08-16 | 2024-01-02 | The Nielsen Company (Us), Llc | Methods and apparatus to determine user presence |
US11758223B2 (en) | 2021-12-23 | 2023-09-12 | The Nielsen Company (Us), Llc | Apparatus, systems, and methods for user presence detection for audience monitoring |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE365325B (en) * | 1971-11-04 | 1974-03-18 | Rothfjell R | |
JPS58201185A (en) * | 1982-05-19 | 1983-11-22 | Toshiba Corp | Position detector |
GB8528143D0 (en) * | 1985-11-14 | 1985-12-18 | British Telecomm | Image encoding & synthesis |
JPH01502368A (en) * | 1986-05-07 | 1989-08-17 | コステロ,ブレンダン,デビッド | How to prove identity |
US4754487A (en) * | 1986-05-27 | 1988-06-28 | Image Recall Systems, Inc. | Picture storage and retrieval system for various limited storage mediums |
US4906940A (en) * | 1987-08-24 | 1990-03-06 | Science Applications International Corporation | Process and apparatus for the automatic detection and extraction of features in images and displays |
US4975969A (en) * | 1987-10-22 | 1990-12-04 | Peter Tal | Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same |
US5012522A (en) * | 1988-12-08 | 1991-04-30 | The United States Of America As Represented By The Secretary Of The Air Force | Autonomous face recognition machine |
GB8905731D0 (en) * | 1989-03-13 | 1989-04-26 | British Telecomm | Pattern recognition |
- 1991
- 1991-07-17 CA CA002087523A patent/CA2087523C/en not_active Expired - Lifetime
- 1991-07-17 EP EP91913053A patent/EP0539439B1/en not_active Expired - Lifetime
- 1991-07-17 US US07/983,853 patent/US5719951A/en not_active Expired - Lifetime
- 1991-07-17 WO PCT/GB1991/001183 patent/WO1992002000A1/en active IP Right Grant
- 1991-07-17 JP JP3512813A patent/JP3040466B2/en not_active Expired - Fee Related
- 1991-07-17 DE DE69131350T patent/DE69131350T2/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
JPH05508951A (en) | 1993-12-09 |
EP0539439B1 (en) | 1999-06-16 |
US5719951A (en) | 1998-02-17 |
JP3040466B2 (en) | 2000-05-15 |
DE69131350D1 (en) | 1999-07-22 |
EP0539439A1 (en) | 1993-05-05 |
WO1992002000A1 (en) | 1992-02-06 |
DE69131350T2 (en) | 1999-12-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2087523C (en) | Method of processing an image | |
Feng et al. | Variance projection function and its application to eye detection for human face recognition | |
CN109558764B (en) | Face recognition method and device and computer equipment | |
Hwang et al. | Reconstruction of partially damaged face images based on a morphable face model | |
Lien et al. | Automated facial expression recognition based on FACS action units | |
Yilmaz et al. | Contour-based object tracking with occlusion handling in video acquired using mobile cameras | |
Baumberg | Learning deformable models for tracking human motion | |
Brown et al. | Comparative study of coarse head pose estimation | |
Yow et al. | A probabilistic framework for perceptual grouping of features for human face detection | |
EP0907144A2 (en) | Method for extracting a three-dimensional model from a sequence of images | |
Bascle et al. | Stereo matching, reconstruction and refinement of 3D curves using deformable contours | |
Jacquin et al. | Automatic location tracking of faces and facial features in video sequences | |
CN109508700A (en) | A kind of face identification method, system and storage medium | |
CN111062328B (en) | Image processing method and device and intelligent robot | |
CN111814603A (en) | Face recognition method, medium and electronic device | |
CN111582036B (en) | Cross-view-angle person identification method based on shape and posture under wearable device | |
Kaur et al. | Comparative study of facial expression recognition techniques | |
Lengagne et al. | From 2D images to 3D face geometry | |
Dey et al. | Computer vision based gender detection from facial image | |
Gunduz et al. | Facial feature extraction using topological methods | |
Ming et al. | A unified 3D face authentication framework based on robust local mesh SIFT feature | |
Mahoor et al. | Multi-modal ear and face modeling and recognition | |
Chomat et al. | Recognizing motion using local appearance | |
CN110135362A (en) | A kind of fast face recognition method based under infrared camera | |
Nikolaidis et al. | Facial feature extraction and determination of pose |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKLA | Lapsed | ||
MKEC | Expiry (correction) |
Effective date: 20121202 |