
Publication number: US 20020150281 A1
Publication type: Application
Application number: US 09/946,714
Publication date: Oct 17, 2002
Filing date: Sep 5, 2001
Priority date: Mar 6, 2001
Also published as: CN1258733C, CN1493056A, EP1374145A1, EP1374145A4, US7302087, US20040114781, US20100290676, WO2002071317A1
Inventor: Seong-Won Cho
Original Assignee: Seong-Won Cho
Method of recognizing human iris using Daubechies wavelet transform
US 20020150281 A1
Abstract
The present invention relates to a method of recognizing the human iris using the Daubechies wavelet transform. The dimensions of the characteristic vectors are first reduced by extracting iris features from the inputted iris image signals through the Daubechies wavelet transform. Binary characteristic vectors are then generated by applying quantization functions to the extracted characteristic values; because such low capacity characteristic vectors reduce storage capacity and processing time, the utility of human iris recognition is improved. By measuring the similarity between a generated characteristic vector and a previously registered characteristic vector, the iris patterns can be verified.
Images(8)
Claims(8)
What is claimed is:
1. A method of recognizing a human iris using the Daubechies wavelet transform, the method comprising the steps of:
(a) obtaining an iris image from a user's eye using an image acquisition device;
(b) repeatedly performing said Daubechies wavelet transform on said iris image so as to multi-divide said iris image for a predetermined number of times;
(c) extracting an image with high frequency components from said multi-divided image so as to extract iris features;
(d) extracting characteristic values of a characteristic vector from said extracted image with said high frequency components;
(e) generating a binary characteristic vector by quantizing said extracted characteristic values; and,
(f) determining whether said user is an enrollee by measuring a similarity between said generated characteristic vector and a previously registered characteristic vector.
2. The method of claim 1, further comprising the step of illuminating said user's eye.
3. The method of claim 2, wherein the step of illuminating said user's eye comprises the step of placing a halogen lamp at both ends of said user's eye.
4. The method of claim 1, wherein said step (b) comprises the steps of: extracting a region HH from said multi-divided image having said high frequency components in both x and y directions; storing information of said region HH for use in extracting iris features; and performing multi-division of a region LL from said multi-divided image having low frequency components in both x and y directions.
5. The method of claim 1, wherein said predetermined number of times is set at four.
6. The method of claim 1, wherein said step (c) comprises the steps of: receiving multi-divided images of a plurality of high frequency regions HHi formed by said multi-division in said step (b); calculating the average values of regions HH1 to HHN−1, excluding the last region HHN; assigning said calculated average values to the components of said characteristic vector, respectively; assigning the M values of said last region HHN to the components of said characteristic vector; combining said N−1 average values and said M values so as to generate an (M+N−1)-dimensional characteristic vector; and quantizing all values of said generated characteristic vector into binary values so as to generate a final (M+N−1)-dimensional binary characteristic vector.
7. The method of claim 1, wherein said step (f) comprises the steps of: applying predetermined weights to the i-th dimensions of said characteristic vector generated in said step (e) and said previously registered characteristic vector; calculating the inner product S of said two weighted characteristic vectors; and determining said user as an enrollee if said inner product S is greater than a verification reference value C.
8. The method of claim 1, wherein said image acquisition device comprises a halogen lamp.
Description
CLAIM OF PRIORITY

[0001] This application makes reference to, incorporates the same herein, and claims all benefits accruing under 35 U.S.C. Section 119 from an application for “Method of Recognizing Human Iris Using Daubechies Wavelet Transform,” filed earlier in the Korean Industrial Property Office on Mar. 6, 2001, and there duly assigned Serial No. 2001-11440.

BACKGROUND OF THE INVENTION

[0002] 1. Field of Invention

[0003] The present invention relates to a method of recognizing the human iris and, more particularly, to a method of recognizing the human iris using the Daubechies wavelet transform to reduce the dimensions of the characteristic vectors and thereby improve processing time.

[0004] 2. Description of the Related Art

[0005] An iris recognition system identifies an individual based on information obtained by analyzing the iris patterns, which differ for each individual. An iris recognition system has superior identification accuracy and thus provides better security than other biometric methods that use voice or fingerprints for identification.

[0006] A wavelet transform is typically used to extract the characteristics of iris images and involves analyzing signals in a multi-resolution mode. The wavelet transform is a mathematical framework, developed from the Fourier transform, for modeling systems, signals, and processes using a set of selected basis signals. These signals are referred to as little waves, or wavelets. Recently, the wavelet transform has been widely employed in the field of signal and image processing, as it is faster than traditional signal processing algorithms and can efficiently achieve signal localization in the time and frequency domains. The images are obtained by extracting the iris patterns from an iris image acquired by an image acquisition device; patterns normalized to a 450×60 size are then used to extract the characteristic values using the wavelet transform.

[0007] Other types of wavelet transform are known in the art. For example, the Haar wavelet transform has also been widely used in conventional iris recognition systems, image processing, and the like. However, the Haar wavelet transform has the disadvantage that its characteristic values change irregularly and rapidly. In addition, a high resolution image cannot be obtained if the images are decompressed after they have been compressed. In contrast, the Daubechies wavelet is a continuous function, so the disadvantages associated with the Haar wavelet functions can be avoided in certain instances, allowing more accurate and delicate characteristic values to be extracted. If the images are decompressed after they have been compressed using the Daubechies wavelet transform, they can be restored to the original images with higher resolution than if the Haar wavelet transform is used. However, as the Daubechies wavelet functions are generally more complicated than the Haar wavelet functions, a larger arithmetic quantity may be needed. A main advantage of the Daubechies wavelet transform is that it provides fine characteristic values when performing the wavelet transform to extract the characteristic values. That is, if the Daubechies wavelet transform is used, the iris features can be identified with less data, and the extraction of the iris features can be made accurately.

[0008] Another method of extracting the characteristic values indicative of the iris patterns and forming the characteristic vectors uses the Gabor transform. However, the characteristic vectors generated by this method require 256 or more dimensions and at least 256 bytes, with one byte assigned to each dimension. Thus, practicability and efficiency are undermined when the Gabor transform is used in fields where low capacity information is required.

[0009] The Hamming distance (HD) is used to verify the two characteristic vectors generated in the form of binary vectors. The method of measuring a distance, such as the Hamming distance (HD), between two characteristic vectors (i.e., the characteristic vector relevant to the input pattern and the stored reference characteristic vector) for pattern classification is disclosed in U.S. Pat. No. 5,291,560, the teachings of which are incorporated herein by reference. The bit values assigned to the respective dimensions are compared with each other: if they are identical, 0 is given; if they differ, 1 is given. The sum divided by the total number of dimensions is then obtained as the final result. Hence, this method is simple and useful in discriminating the degree of similarity between characteristic vectors consisting of binary codes. The comparison result of all the bits becomes 0 if identical data are compared. Thus, a result approaching 0 implies that the data belong to the same person. If the data belong to different persons, the Hamming distance will be about 0.5. Accordingly, a proper limit set between 0 and 0.5 will be a boundary for differentiating between people. The Hamming distance (HD) also works well with the extracted iris features when the data are subdivided, but it is not suitable when low capacity data are to be used. If the total number of bits of the characteristic vectors with 256-byte information is 2048, considerably high acceptance rates are realized when the Hamming distance is applied. In addition, there are disadvantages in that the reference characteristic vectors cannot easily be formed by generalizing the pattern information, and one cannot rely upon the information characteristics of each dimension of the characteristic vectors.
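The normalized Hamming distance described above can be sketched in a few lines of Python. This is an illustration, not code from the patent; the vector values are made up for the example.

```python
def hamming_distance(a, b):
    """Fraction of differing bits between two equal-length binary vectors.

    Identical vectors give 0.0; two independent random binary vectors
    give about 0.5, matching the boundary discussed in the text.
    """
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

same = [1, 0, 1, 1, 0, 0, 1, 0]
other = [1, 1, 0, 1, 0, 1, 1, 0]
print(hamming_distance(same, same))   # 0.0
print(hamming_distance(same, other))  # 0.375 (3 differing bits / 8)
```

A decision threshold between 0 and 0.5 then separates "same person" from "different person", exactly as the paragraph above describes.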

[0010] Accordingly, if the low capacity characteristic vectors are used, the accuracy of differentiating characteristic vectors is poor due to an increase in lost information. Thus, a method of preventing information loss while maintaining the minimum capacity of the characteristic vectors is needed in generating the characteristic vectors. Accordingly, there is a need for a method of forming the low capacity characteristic vectors, so that the processing, storage, transfer, search, and the like of the pattern information can be achieved efficiently.

SUMMARY OF INVENTION

[0011] The present invention is directed to a method of forming low capacity characteristic vectors, so that the false acceptance rate (FAR) and the false rejection rate (FRR) can be remarkably reduced as compared with the conventional Haar wavelet transform. To this end, the iris features are extracted from the inputted iris image signals using the Daubechies wavelet transform.

[0012] One aspect of the present invention provides a method for measuring the similarity between the characteristic vectors, wherein the low capacity characteristic vectors can be properly used for the similarity measurement while the loss of information can be minimized.

[0013] Another aspect of the present invention provides a method for recognizing the human iris using the Daubechies wavelet transform, wherein the iris image is obtained from an eye using an image acquisition device with a halogen lamp illuminator. The method includes the steps of: (a) repeatedly performing the Daubechies wavelet transform on the iris image a predetermined number of times to multi-divide the iris image, and extracting an image including the high frequency components from the multi-divided image to extract iris features; (b) extracting the characteristic values of a characteristic vector from the extracted image with the high frequency components, and generating a binary characteristic vector by quantizing the relevant characteristic values; and (c) determining the user as an enrollee based on the similarity between the generated characteristic vector and a previously registered characteristic vector.

[0014] According to another aspect of the present invention, the iris image is acquired through an image acquisition device utilizing a halogen lamp as an illuminator. By repeatedly performing the Daubechies wavelet transform on the inputted iris image, the iris image is multi-divided, and iris features with optimized sizes are extracted. The characteristic vector, which is effective in displaying and processing the image, is then formed by quantizing the extracted characteristic values. Furthermore, the dimension of the characteristic vector is reduced by quantizing the extracted characteristic values into binary values. That is, when a low capacity characteristic vector is formed, a method of measuring the similarity between the weighted registered and inputted characteristic vectors is used to prevent the reduction in acceptance resulting from the formation of the low capacity characteristic vector. User authenticity is, therefore, determined by the foregoing method.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015]FIG. 1 is a view illustrating the constitution of the image acquisition equipment used for performing an iris recognition method according to the present invention.

[0016]FIG. 2 is a flowchart illustrating the process of verifying an iris image according to the present invention.

[0017]FIG. 3 is a flowchart illustrating the process of multi-dividing the iris image using the Daubechies wavelet transform according to the present invention.

[0018]FIG. 4 shows an example of multi-dividing the iris image using the Daubechies wavelet transform.

[0019]FIG. 5 is a flowchart illustrating the process of forming the characteristic vector of an iris image based on the data acquired from the multi-dividing operation according to the present invention.

[0020]FIG. 6a shows a distribution example of the characteristic values of the extracted iris image.

[0021]FIG. 6b shows the quantization function for generating a binary characteristic vector from the distribution example of FIG. 6a.

[0022]FIG. 7 is a flowchart showing the procedures for determining user authenticity through a similarity test between the characteristic vectors.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0023] Hereinafter, a method of recognizing a human iris using the Daubechies wavelet transform according to the present invention will be explained in detail with reference to the accompanying drawings.

[0024] FIG. 1 shows an exemplary embodiment of the image acquisition equipment for use in recognizing a human iris according to the present invention. The image acquisition equipment includes a halogen lamp 11 for illuminating the iris in order to acquire clear iris patterns, a CCD camera 13 for photographing the eye 10 of a user through a lens 12, a frame grabber 14 connected to the CCD camera 13 for acquiring the iris image, and a monitor 15 for showing the image to the user so that correct images can be acquired and the user can check his or her position while the images are acquired.

[0025] In this embodiment, the CCD camera 13 is used to acquire the eye image, and iris recognition is performed through the pattern analysis of iridial folds. However, where the iris image is acquired indoors under ordinary illumination, it is difficult to extract the desired pattern information, as the iris image is generally gloomy. Additional illuminators should therefore be used so that the information in the iris image is not lost and a clear iris pattern can be obtained. In the present invention, the halogen lamp 11, with its strong floodlighting effect, is preferably used as the main illuminator so that the iris pattern can be clearly shown. However, it should be noted that other light sources known to those skilled in the art can be used successfully. Furthermore, as shown in FIG. 1, the loss of iris image information and eye fatigue of the user can be avoided by placing the halogen lamp illuminators on the left and right sides of the eye, so that the light reflected from the lamps falls on the outer portions of the iris region.

[0026] FIG. 2 is a flowchart showing the operation steps for verifying the iris image for identification purposes according to the present invention. Referring to FIG. 2, the eye image is acquired through the image acquisition equipment shown in FIG. 1 in step 200. In step 210, the images of the iris regions are extracted from the acquired eye image through pre-processing and transformed into a polar coordinate system; the transformed iris pattern is then inputted to a module for extracting the features. Acquiring the iris image and transforming it into a polar coordinate system are well known in the art and can be performed in a variety of ways. In step 220, the Daubechies wavelet transform of the inputted iris pattern, transformed into the polar coordinate system, is performed, and the features of the iris regions are then extracted. The extracted features are real-valued. In step 230, a binary characteristic vector is generated by applying a K-level quantization function to the extracted features. In step 240, the similarity between the generated characteristic vector and the previously registered data of the user is measured. Through the similarity measurement, user authenticity is determined, and the verification results are obtained.

[0027] When the features of the iris regions are extracted by performing the Daubechies wavelet transform as described above, a Daubechies wavelet function with eight, sixteen, or more coefficients can extract more delicate characteristic values than the Daubechies wavelet function with four coefficients, even though the former is more complicated than the latter. Although the Daubechies wavelet function with eight or more coefficients was tested in the present invention, no significant performance improvement was obtained, while the arithmetic quantity and processing time increased, as compared with the Daubechies wavelet function with four coefficients. Hence, the Daubechies wavelet function with four coefficients may be used for extracting the characteristic values indicative of the iris patterns.
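For reference, the four-coefficient Daubechies (D4) filter mentioned above has standard, closed-form coefficients. The sketch below (Python, not taken from the patent) lists them and checks two well-known identities: the low-pass coefficients sum to √2 with unit energy, and the quadrature-mirror high-pass filter sums to zero, so it suppresses constant signals.

```python
import math

s3 = math.sqrt(3)
# Four-coefficient Daubechies (D4) low-pass analysis filter
h = [(1 + s3) / (4 * math.sqrt(2)),
     (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)),
     (1 - s3) / (4 * math.sqrt(2))]
# Quadrature-mirror high-pass filter derived from h
g = [h[3], -h[2], h[1], -h[0]]

# Standard D4 identities: sum sqrt(2), unit energy, zero DC response.
print(round(sum(h), 6))                  # 1.414214
print(round(sum(c * c for c in h), 6))   # 1.0
print(abs(round(sum(g), 6)))             # 0.0
```

These identities are what make the transform separate an "average" (low-pass) component from a "differential" (high-pass) component, as described later in the detailed description.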

[0028] FIG. 3 is a flowchart showing the process of multi-dividing the iris image by performing the Daubechies wavelet transform according to the present invention. FIG. 4 shows an image divided using the Daubechies wavelet transform. As shown in FIG. 4, when “L” and “H” are used to indicate low frequency and high frequency components, respectively, the term “LL” indicates the component that has passed through a low-pass filter (LPF) in both the x and y directions, whereas the term “HH” indicates the component that has passed through a high-pass filter (HPF) in both directions. The subscript numerals signify image-dividing stages. For example, “LH2” means that the image has passed through the low-pass filter in the x direction and through the high-pass filter in the y direction during the 2-stage wavelet division.

[0029] Referring back to FIG. 3, in step 310, the inputted iris image is multi-divided using the Daubechies wavelet transform. As the iris image is a two-dimensional signal in which one-dimensional signals are arrayed in the x and y directions, four quarter-size components of one image should be extracted by passing the image through the LPF and HPF in both the x and y directions in order to analyze it. That is, one two-dimensional image signal is wavelet-transformed in the vertical and horizontal directions, and the image is divided into four regions, LL, LH, HL, and HH, after the wavelet transform has been performed once. In the Daubechies wavelet transform, the signal is divided into a differential component that has passed through the high-pass filter and an average component that has passed through the low-pass filter.
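One level of this separable 2-D split can be sketched in pure Python. This is a minimal illustration under assumptions the patent does not state: periodic (circular) boundary handling, and the D4 filters written out explicitly. The constant test image is made up for the example.

```python
import math

s3, s2 = math.sqrt(3), math.sqrt(2)
H = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]  # D4 low-pass
G = [H[3], -H[2], H[1], -H[0]]                  # D4 high-pass

def filter_down(x, f):
    """Circular convolution with 4-tap filter f, downsampled by 2."""
    n = len(x)
    return [sum(f[k] * x[(2 * i + k) % n] for k in range(4))
            for i in range(n // 2)]

def analyze_2d(img):
    """One level of the separable 2-D split: rows (x), then columns (y).

    Returns the four quarter-size subbands LL, LH, HL, HH named as in
    the text ('LH' = low-pass in x, high-pass in y).
    """
    lo = [filter_down(row, H) for row in img]   # low-pass along x
    hi = [filter_down(row, G) for row in img]   # high-pass along x
    def by_cols(m, f):
        cols = [filter_down(list(c), f) for c in zip(*m)]
        return [list(r) for r in zip(*cols)]
    return (by_cols(lo, H), by_cols(lo, G),     # LL, LH
            by_cols(hi, H), by_cols(hi, G))     # HL, HH

# A constant image has no high-frequency content: everything lands in LL.
flat = [[1.0] * 8 for _ in range(8)]
LL, LH, HL, HH = analyze_2d(flat)
print(len(LL), len(LL[0]))       # 4 4
print(round(LL[0][0], 6))        # 2.0  (sqrt(2) gain per direction)
print(round(abs(HH[0][0]), 6))   # 0.0
```

The "average" component appears in LL (amplified by √2 per direction) and the "differential" components LH, HL, HH vanish for a flat input, matching the description above.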

[0030] The performance of an iris recognition system is evaluated in view of two factors: the false acceptance rate (FAR) and the false rejection rate (FRR). Here, the FAR is the probability that an unregistered person (impostor) is accepted due to being falsely recognized as a registered person, and the FRR is the probability that a registered person (enrollee) is rejected due to being falsely recognized as an unregistered one. In simulation, when the method of recognizing the human iris using the Daubechies wavelet transform according to the present invention was employed, the FAR was reduced from 5.5% to 3.07% and the FRR from 5.0% to 2.25%, as compared with the method using the conventional Haar wavelet transform.

[0031] In step 320, a region HH including only the high frequency components in the x and y directions is extracted from the divided iris image.

[0032] In step 330, the iteration count for dividing the iris image is incremented, and the processing is complete when the count exceeds a predetermined number. Otherwise, if the count is below the predetermined number, the information on the region HH is stored for use in extracting the iris features in step 340.

[0033] In step 350, the region LL comprising only low frequency components in the x and y directions is extracted from the multi-divided iris image. As the extracted region LL (corresponding to an image reduced to one quarter of the size of the previous image) includes the major information on the iris image, it is provided as the image to be newly processed so that the wavelet transform can be applied again to the relevant region. Thereafter, the Daubechies wavelet transform is repeated from step 310.

[0034] When the iris image is transformed from the Cartesian coordinate system to the polar coordinate system, in order to avoid changes in the iris features according to variations in the size of the pupil, the region between the inner and outer boundaries of the iris is divided into 60 segments in the r direction and 450 segments in the θ direction by varying the angle in steps of 0.8 degrees. Finally, the information on the iris image is acquired and normalized as 450×60 (θ×r) data. Then, if the acquired iris image is wavelet-transformed once, the 225×30 region HH1, whose size is reduced by half in each direction, is obtained; that is, the 225×30 information could be used as a characteristic vector. This information may be used as it is, but the process of dividing the signals is repeatedly performed in order to reduce the information size. Since the region LL includes the major information on the iris image, the characteristic values of further reduced regions, such as HH2, HH3, and HH4, are obtained by successively applying the wavelet transform to the respective relevant regions.

[0035] The iterative number, which is provided as a discriminating criterion for repeatedly performing the wavelet transform, should be set as an optimal value in consideration of the loss of information and the size of the characteristic vector. Therefore, in the present invention, the region HH4, obtained by performing the wavelet transform four times, becomes the major characteristic region, and its values are selected as the components of the characteristic vector. At this time, the region HH4 contains the information having 84 (=28×3) data.
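The subband sizes quoted above follow from repeated halving of the 450×60 normalized image. A small sketch (the helper name is hypothetical; flooring on odd sizes is an assumption consistent with the patent's quoted numbers):

```python
def hh_sizes(height, width, levels):
    """Sizes of the HH subbands produced by repeated halving (floor division)."""
    sizes = []
    for _ in range(levels):
        height, width = height // 2, width // 2
        sizes.append((height, width))
    return sizes

sizes = hh_sizes(450, 60, 4)
print(sizes)                         # [(225, 30), (112, 15), (56, 7), (28, 3)]
print(sizes[-1][0] * sizes[-1][1])   # 84
```

HH1 is 225×30 and HH4 is 28×3 = 84 values, matching the figures given in paragraphs [0034] and [0035].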

[0036] FIG. 5 is a flowchart showing the process of forming the characteristic vector of the iris image using the data acquired from the multi-divided iris image according to the present invention. Referring to FIG. 5, the information on the N high frequency regions extracted from the above process, i.e., the information on the regions HH1, HH2, HH3, and HH4, is inputted in step 510. In step 520, in order to acquire the characteristic information on the regions HH1, HH2, and HH3, excluding the information on the region HH4 obtained through the last wavelet transform, the average value of each of the regions HH1, HH2, and HH3 is calculated and assigned one dimension. In step 530, all values of the finally obtained region HH4 are extracted as characteristic values. After extraction of the characteristics of the iris image signals has been completed, the characteristic vector is generated based on these characteristics. A module for generating the characteristic vector mainly performs the processes of extracting the characteristic values in the form of real numbers and then transforming them into binary codes consisting of 0 and 1.

[0037] Next, in step 540, the N−1 characteristic values extracted in step 520 and the M characteristic values (M being the size of the finally obtained region HH) extracted in step 530 are combined, and an (M+N−1)-dimensional characteristic vector is generated. That is, a total of 87 data values, combining the 84 data of the region HH4 with the 3 average data of the regions HH1, HH2, and HH3, are used as the characteristic vector in the present invention.

[0038] In step 550, the values of the previously obtained characteristic vector, i.e., the respective component values of the characteristic vector expressed in the form of real numbers, are quantized into the binary values 0 and 1. In step 560, the resultant (M+N−1)-bit characteristic vector is generated from the quantized values. That is, according to the present invention, a resultant 87-bit characteristic vector is generated.
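Steps 510 through 560 can be sketched as follows. The helper name and the synthetic region values are illustrative, not from the patent; real inputs would come from the wavelet decomposition.

```python
def build_feature_vector(hh_regions):
    """Combine the mean of every HH region except the last with all values
    of the last region, then quantize each component by its sign.

    With regions HH1..HH4, where HH4 holds M = 84 values, this yields the
    (M + N - 1) = 87-component binary vector described in the text.
    """
    means = [sum(r) / len(r) for r in hh_regions[:-1]]  # N-1 averages
    real_vector = means + list(hh_regions[-1])          # M + N - 1 reals
    return [1 if v >= 0 else 0 for v in real_vector]    # sign quantization

# Toy stand-in regions (real ones come from the wavelet decomposition):
hh1, hh2, hh3 = [0.2, -0.4], [0.1, 0.3], [-0.5, -0.1]
hh4 = [0.6, -0.2, 0.05, -0.9] * 21        # 84 synthetic values
bits = build_feature_vector([hh1, hh2, hh3, hh4])
print(len(bits))  # 87
```

The sign-based binarization here corresponds to Equation 1 below; the 4-level variant of Equation 2 refines it by keeping magnitude information.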

[0039]FIG. 6a shows a distribution example of the characteristic values of the extracted iris image. When the values of the 87-dimensional characteristic vector are distributed according to the respective dimensions, the distribution roughly takes the shape of FIG. 6a. The binary vector including all the dimensions is generated by the following Equation 1.

fn = 0 if f(n) < 0

fn = 1 if f(n) ≥ 0  (1)

[0040] where f(n) is the characteristic value of the n-th dimension, and fn is the n-th component of the characteristic vector.

[0041] When the 87-bit characteristic vector obtained by assigning one bit to each of the 87 dimensions is generated in order to use a low capacity characteristic vector, the improvement of the recognition rate is limited to some extent, as the loss of information on the iris image increases. Therefore, when generating the characteristic vector, it is necessary to prevent information loss while maintaining the minimum capacity of the characteristic vector.

[0042] FIG. 6b shows a quantization function for generating a binary characteristic vector from the distribution example of the characteristic values shown in FIG. 6a. The extracted (M+N−1)-dimensional characteristic vector shown in FIG. 6a is mostly distributed evenly between 1 and −1 in view of its magnitude. The quantized vector is then generated by applying the K-level quantization function shown in FIG. 6b to the characteristic vector. Since only the signs of the characteristic values are obtained through the process of Equation 1, information on the magnitude is discarded. Thus, in order to retain the magnitude of the characteristic vector, a 4-level quantization process is utilized in the present invention.

[0043] As described above, in order to efficiently compare the characteristic vector generated through the 4-level quantization with the registered characteristic vector, the quantization levels have the weights expressed in the following Equation 2.

fn = 4 if f(n) ≥ 0.5 (level 4)

fn = 1 if 0.5 > f(n) ≥ 0 (level 3)

fn = −1 if 0 > f(n) > −0.5 (level 2)

fn = −4 if f(n) ≤ −0.5 (level 1)  (2)

[0044] where fn represents the n-th dimension of the previously registered characteristic vector fR of the user or of the characteristic vector fT generated from the iris image of the user's eye. An explanation of how to use the weights expressed in Equation 2 is as follows.

[0045] Where the n-th dimensional characteristic value f(n) is equal to or greater than 0.5 (level 4), the value of the i-th dimension fRi or fTi, whose binary code is “11,” is converted and assigned “4.” Where f(n) is equal to or greater than 0 and less than 0.5 (level 3), the value fRi or fTi, whose code is “10,” is converted and assigned “1.” Where f(n) is greater than −0.5 and less than 0 (level 2), the value fRi or fTi, whose code is “01,” is converted and assigned “−1.” Where f(n) is equal to or less than −0.5 (level 1), the value fRi or fTi, whose code is “00,” is converted and assigned “−4.” The weights of Equation 2 are applied to the respective values because they are suitable for the following verification method of the present invention.
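The 4-level mapping of Equation 2 can be expressed directly (illustrative Python; the sample values are made up):

```python
def quantize_4level(value):
    """Map a real characteristic value to the weighted level of Equation 2."""
    if value >= 0.5:
        return 4    # level 4, code '11'
    if value >= 0.0:
        return 1    # level 3, code '10'
    if value > -0.5:
        return -1   # level 2, code '01'
    return -4       # level 1, code '00'

samples = [0.7, 0.5, 0.2, 0.0, -0.2, -0.5, -0.9]
print([quantize_4level(v) for v in samples])  # [4, 4, 1, 1, -1, -4, -4]
```

Note the asymmetric weights (±4 for the outer levels, ±1 for the inner ones): agreement on a strongly positive or strongly negative value contributes much more to the inner product of Equation 3 than agreement near zero.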

[0046] FIG. 7 is a flowchart showing the procedures for discriminating user authenticity through the similarity measurement test between the characteristic vectors. Referring to FIG. 7, in step 710, the characteristic vector fT of the user is generated from the iris image of the user's eye. In step 720, the previously registered characteristic vector fR of the user is retrieved. In step 730, in order to measure the similarity between the two characteristic vectors, weights are assigned to the characteristic vectors fR and fT depending on the values of the binary characteristic vector, based on Equation 2.

[0047] In step 740, the inner product (scalar product) S of the two characteristic vectors is calculated, and the similarity is finally measured. Among the measures generally used for determining the correlation between the registered characteristic vector fR and the characteristic vector fT of the user, it is the inner product S of the two characteristic vectors that indicates the most direct association. That is, after the weights have been assigned to the respective data of the characteristic vectors in step 730, the inner product S of the two characteristic vectors is used to measure the similarity between the two vectors.

[0048] The following Equation 3 is used for calculating the inner product of the two characteristic vectors:

S = Σ (i = 1 to n) fRi fTi = (fR1 fT1 + fR2 fT2 + … + fRn fTn),  (3)

[0049] where fR is the characteristic vector of the user that has already been registered, and fT is the characteristic vector of the user that is generated from the iris image of the user's eye.

[0050] According to the above processes, quantization according to the sign of the characteristic values yields, for each dimension, an effect similar to that of generating a binary vector from the values of the characteristic vector extracted from the iris image. That is, as with the Hamming distance, the difference between 0 and 1 can be expressed. Where the two characteristic vectors have same-signed values in a given dimension, a positive value is added to the inner product S of the two characteristic vectors; otherwise, a negative value is added. Consequently, the inner product S of the two characteristic vectors increases if the two data belong to an identical person, while it decreases if they do not.

[0051] In step 750, the user authenticity is determined according to the measured similarity obtained from the inner product S of the two characteristic vectors. At this time, the determination of the user authenticity based on the measured similarity depends on the following Equation 4.

If S > C, then TRUE; else FALSE  (4),

[0052] where C is a reference value for verifying the similarity between the two characteristic vectors.

[0053] That is, if the inner product S of the two characteristic vectors is greater than the verification reference value C, the user is determined to be an enrollee. Otherwise, the user is determined to be an impostor.
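Equations 3 and 4 together can be sketched as follows. The vectors and the threshold C here are made-up toy values; real vectors would be the 87-dimensional weighted vectors from Equation 2.

```python
def verify(f_r, f_t, c):
    """Equation 3 inner product of two weighted vectors, then the
    Equation 4 threshold test: TRUE (enrollee) if S > C."""
    if len(f_r) != len(f_t):
        raise ValueError("vectors must have the same length")
    s = sum(a * b for a, b in zip(f_r, f_t))
    return s, s > c

registered = [4, 1, -1, -4, 4, 1]
same_user  = [4, 1, -4, -4, 4, 1]    # one level off in dimension 3
impostor   = [-4, 1, 4, 4, -1, -4]   # mostly opposite signs

print(verify(registered, same_user, c=20))  # (54, True)  -> accepted
print(verify(registered, impostor,  c=20))  # (-43, False) -> rejected
```

Same-signed dimensions add positive terms and opposite-signed dimensions add negative terms, so S is large for the enrolled user and small (or negative) for an impostor, exactly as paragraph [0050] describes.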

[0054] As described above, the method of recognizing the human iris using the Daubechies wavelet transform according to the present invention has the advantage that the FAR and FRR can be remarkably reduced as compared with the method using the conventional Haar wavelet transform, as the iris features are extracted from the inputted iris image signals through the Daubechies wavelet transform.

[0055] Furthermore, in order to verify the similarity between the registered and extracted characteristic vectors fR and fT, the inner product S of the two characteristic vectors is calculated, and the user authenticity is determined based on the measured similarity obtained by the calculated inner product S of the two vectors. Therefore, there is provided a method of measuring the similarity between the characteristic vectors wherein the loss of the information, which may be produced by forming the low capacity characteristic vectors, can be minimized.

[0056] The foregoing is merely one embodiment of the method of recognizing the human iris using the Daubechies wavelet transform according to the present invention. The present invention is not limited to the embodiment described above; a person skilled in the art can make various modifications and changes without departing from the technical spirit and scope of the present invention as defined by the appended claims.

Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7436986 | Mar 25, 2003 | Oct 14, 2008 | Bausch & Lomb Incorporated | Positive patient identification
US7890335 * | Oct 10, 2002 | Feb 15, 2011 | Texas Instruments Incorporated | Sharing wavelet domain components among encoded signals
US8160368 * | Feb 2, 2007 | Apr 17, 2012 | Japan Science And Technology Agency | Image feature extraction method and image compression method
US8285005 * | Aug 11, 2009 | Oct 9, 2012 | Honeywell International Inc. | Distance iris recognition
US20090324064 * | Feb 2, 2007 | Dec 31, 2009 | Japan Science And Technology Agency | Image feature extraction method and image compression method
US20100002913 * | Aug 11, 2009 | Jan 7, 2010 | Honeywell International Inc. | Distance iris recognition
US20100260390 * | Nov 29, 2006 | Oct 14, 2010 | The Research Foundation Of State University Of New York | System and method for reduction of false positives during computer aided polyp detection
WO2004084726A1 * | Mar 16, 2004 | Oct 7, 2004 | Bausch & Lomb | Positive patient identification
Classifications
U.S. Classification: 382/117
International Classification: G06T1/00, A61B5/117, G06T7/00, G06F21/20, G06K9/00, G06F17/14
Cooperative Classification: G06K9/00597
European Classification: G06K9/00S
Legal Events
Date: Oct 12, 2001
Code: AS (Assignment)
Owner name: EVERMEDIA CO., LTD., KOREA, REPUBLIC OF
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHO, SEONG-WON;REEL/FRAME:012258/0232
Effective date: 20010810