|Publication number||US7593602 B2|
|Application number||US 10/537,540|
|Publication date||Sep 22, 2009|
|Filing date||Nov 24, 2003|
|Priority date||Dec 19, 2002|
|Also published as||CA2505779A1, CA2505779C, CN1723456A, CN100449537C, DE60321960D1, EP1573589A2, EP1573589B1, US20060050993, WO2004057493A2, WO2004057493A3|
|Publication number||10537540, 537540, PCT/2003/5096, PCT/GB/2003/005096, PCT/GB/2003/05096, PCT/GB/3/005096, PCT/GB/3/05096, PCT/GB2003/005096, PCT/GB2003/05096, PCT/GB2003005096, PCT/GB200305096, PCT/GB3/005096, PCT/GB3/05096, PCT/GB3005096, PCT/GB305096, US 7593602 B2, US 7593602B2, US-B2-7593602, US7593602 B2, US7593602B2|
|Inventors||Frederick W M Stentiford|
|Original Assignee||British Telecommunications Plc|
|Patent Citations (53), Non-Patent Citations (60), Referenced by (14), Classifications (15), Legal Events (2)|
This application is the US national phase of international application PCT/GB2003/005096 filed 24 Nov. 2003 which designated the U.S. and claims benefit of GB 0229625.9, dated 19 Dec. 2002, the entire content of which is hereby incorporated by reference.
1. Technical Field
This application concerns the retrieval of stored images that are stored with associated metadata.
2. Related Art
The wide availability of digital sensor technology together with the falling price of storage devices has spurred an exponential growth in the volume of image material being captured for a range of applications. Digital image collections are rapidly increasing in size and include basic home photos, image based catalogues, trade marks, fingerprints, mugshots, medical images, digital museums, and many art and scientific collections. It is not surprising that a great deal of research effort over the last five years has been directed at developing efficient methods for browsing, searching and retrieving images [1,2].
Content-based image retrieval requires that visual material be annotated in such a way that users can retrieve the images they want efficiently and effortlessly. Current systems rely heavily upon textual tagging and measures (e.g. colour histograms) that do not reflect the image semantics. This means that users must be very conversant with the image features employed by the retrieval system in order to obtain sensible results, and are forced to use potentially slow and unnatural interfaces when dealing with large image databases. These barriers not only prevent the user from exploring the image set with high recall and precision, but also make the process slow and place a great burden on the user.
Early retrieval systems made use of textual annotation, but these approaches do not always suit retrieval from large databases because of the cost of the manual labour involved and the inconsistency of the descriptions, which by their nature are heavily dependent upon the individual subjective interpretation placed upon the material by the human annotator. To combat these problems, techniques have been developed for indexing images based on their visual content rather than highly variable linguistic descriptions.
It is the job of an image retrieval system to produce images that a user wants. In response to a user's query the system must offer images that are similar in some user-defined sense. This goal is met by selecting features thought to be important in human visual perception and using them to measure relevance to the query. Colour, texture, local shape and layout in a variety of forms are the most widely used features in image retrieval [4,5,6,7,8,9,10]. One of the first commercial image search engines was QBIC, which executes user queries against a database of pre-extracted features. VisualSEEk and SaFe determine similarity by measuring image regions using both colour parameters and spatial relationships, and obtain better performance than histogramming methods that use colour information alone. NeTra also relies upon image segmentation to carry out region-based searches that allow the user to select example regions and lay emphasis on image attributes to focus the search. Region-based querying is also favoured in Blobworld, where global histograms are shown to perform comparatively poorly on images containing distinctive objects. Similar conclusions were obtained in comparisons with the SIMPLIcity system. The Photobook system endeavours to use compressed representations that preserve essential similarities and are “perceptually complete”. Methods for measuring appearance, shape and texture are presented for image database search, but the authors point out that multiple labels can be justifiably assigned to overlapping image regions using varied notions of similarity.
Analytical segmentation techniques are sometimes seen as a way of decomposing images into regions of interest and semantically useful structures [21-23,45]. However, object segmentation for broad domains of general images is difficult, and a weaker form of segmentation that identifies salient point sets may be more fruitful.
Relevance feedback is often proposed as a technique for overcoming many of the problems faced by fully automatic systems, by allowing the user to interact with the computer to improve retrieval performance [31,43]. In Quicklook and ImageRover, items identified by the user as relevant are used to adjust the weights assigned to the similarity function to obtain better search performance. More information is provided to the systems by the users, who have to make decisions in terms specified by the machine. MetaSeek maintains a performance database of four different online image search engines and directs new queries to the best performing engine for that task. PicHunter has implemented a probabilistic relevance feedback mechanism that predicts the target image based upon the content of the images already selected by the user during the search. This reduces the burden on unskilled users to set quantitative pictorial search parameters or to select images that come closest to meeting their goals. Most notably, the combined use of hidden semantic links between images improved the system performance for target image searching. However, the relevance feedback approach requires the user to reformulate his visual interests in ways that he frequently does not understand.
Region-based approaches are being pursued with some success using a range of techniques. The SIMPLIcity system defines an integrated region matching process which weights regions with ‘significance credit’ in accordance with an estimate of their importance to the matching process. This estimate is related to the size of the region being matched and whether it is located in the centre of the image, and will tend to emphasise neighbourhoods that satisfy these criteria. Good image discrimination is obtained with features derived from salient colour boundaries using multimodal neighbourhood signatures [13-15,36]. Measures of colour coherence [16,29] within small neighbourhoods are employed to incorporate some spatial information when comparing images. These methods are being deployed in the 5th Framework project ARTISTE [17,18,20], aimed at automating the indexing and retrieval of the multimedia assets of European museums and galleries. The MAVIS-2 project uses quad trees and a simple grid to obtain spatial matching between image regions.
Much of the work in this field is guided by the need to implement perceptually based systems that emulate human vision and make the same similarity judgements as people. Texture and colour features, together with rules for their use, have been defined on the basis of subjective testing and applied to retrieval problems. At the same time, research into computational perception is being applied to problems in image search [25,26]. Models of human visual attention are used to generate image saliency maps that identify important or anomalous objects in visual scenes [25,44]. Strategies for directing attention using fixed colour and corner measurements are devised to speed the search for target images. Although these methods achieve a great deal of success on many types of image, the pre-defined feature measures and rules for applying them will preclude good search solutions in the general case.
The tracking of eye movements has been employed as a pointer and a replacement for a mouse, to vary the screen scrolling speed, and to assist disabled users. However, this work has concentrated upon replacing and extending existing computer interface mechanisms rather than creating a new form of interaction. Indeed, the imprecise nature of saccades and fixation points has prevented these approaches from yielding benefits over conventional human interfaces.
Notions of pre-attentive vision [25,32-34] and visual similarity are very closely related. Both aspects of human vision are relevant to content-based image retrieval; attention mechanisms tell us what is eye-catching and important within an image, and visual similarity tells us what parts of an image match a different image.
A more recent development has yielded a powerful similarity measure. In this case the structure of a region in one image is compared with random parts of a second image while seeking a match. If a match is found, the score is increased, and a series of randomly generated features are applied to the same location in the second image that obtained the first match. A high scoring region in the second image is only reused while it continues to yield matches from randomly generated features and to increase the similarity score. The conjecture that a region in the second image that shares a large number of different features with a region in the first image is perceptually similar is reasonable, and appears to be the case in practice. The measure has been tested on trademark images and fingerprints and, within certain limits, shown to be tolerant of translation, rotation, scale change, blur, additive noise and distortion. This approach does not make use of a pre-defined distance metric plus feature space in which feature values are extracted from a query image and used to match those from database images, but instead generates features on a trial and error basis during the calculation of the similarity measure. This has the significant advantage that the features that determine similarity can match whatever image property is important in a particular region, whether it be a shape, a texture, a colour or a combination of all three. It means that effort is expended searching for the best feature for the region, rather than expecting that a fixed feature set will perform optimally over the whole area of an image and over every image in the database. There are no necessary constraints on the pixel configurations used as features apart from the colour space and the size of the regions, which is dependent in turn upon the definition of the original images.
More formally, in this method (full details of which are given in our International patent application WO 03/081523), a first image (or other pattern) is represented by a first ordered set of elements A, each having a value, and a second pattern is represented by a second such set. A comparison of the two involves performing, for each of a plurality of elements x of the first ordered set, the steps of selecting from the first ordered set a plurality of elements x′ in the vicinity of the element x under consideration, selecting an element y of the second ordered set, and comparing the elements x′ of the first ordered set with elements y′ of the second ordered set (each of which has the same position relative to the selected element y of the second ordered set as a respective one x′ of the selected plurality of elements of the first ordered set has relative to the element x under consideration). The comparison itself comprises comparing the value of each of the selected plurality of elements x′ of the first set with the value of the correspondingly positioned element y′ of the like plurality of elements of the second set in accordance with a predetermined match criterion, to produce a decision as to whether the plurality of elements of the first ordered set matches the plurality of elements of the second ordered set. The comparison is then repeated with a fresh selection of the plurality of elements x′ of the first set and/or a fresh selection of an element y of the second ordered set, generating a similarity measure V as a function of the number of matches. Preferably, following a comparison resulting in a match decision, the next comparison is performed with a fresh selection of the plurality of elements x′ of the first set and the same selection of an element y of the second set.
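As an illustration, the comparison procedure above can be sketched in the following Python code. The match criterion (absolute difference within a threshold), the neighbourhood size and all parameter values are assumptions made for this sketch, not values taken from WO 03/081523:

```python
import random

def similarity(a, b, iterations=1000, radius=3, n_offsets=4, threshold=30):
    """Monte-Carlo sketch of the similarity measure described above, for two
    greyscale images given as 2-D lists of ints.  Parameter names, the match
    threshold and the neighbourhood size are illustrative assumptions."""
    ha, wa = len(a), len(a[0])
    hb, wb = len(b), len(b[0])
    score = 0
    x_pos = y_pos = None
    for _ in range(iterations):
        if x_pos is None:
            # fresh element x of the first set and candidate element y of the
            # second, kept away from the borders so the offsets stay in range
            x_pos = (random.randrange(radius, ha - radius),
                     random.randrange(radius, wa - radius))
            y_pos = (random.randrange(radius, hb - radius),
                     random.randrange(radius, wb - radius))
        # a fresh randomly generated feature: a small pixel configuration x'
        offsets = [(random.randint(-radius, radius),
                    random.randint(-radius, radius))
                   for _ in range(n_offsets)]
        (xr, xc), (yr, yc) = x_pos, y_pos
        # match criterion: corresponding values agree to within the threshold
        matched = all(abs(a[xr + dr][xc + dc] - b[yr + dr][yc + dc]) <= threshold
                      for dr, dc in offsets)
        if matched:
            score += 1             # the location in b is reused while it keeps matching
        else:
            x_pos = y_pos = None   # fresh selections next time
    return score
```

Because features are generated on a trial and error basis, identical regions accumulate matches from almost any random configuration, while unrelated regions are quickly abandoned.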
According to the present exemplary embodiment there is provided a method of retrieval of stored images stored with metadata for at least some of the stored images, the metadata comprising at least one entry specifying
Other aspects of the exemplary embodiments are set out in the other claims.
Some exemplary embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
The apparatus shown in
The first method to be described, suitable for a small database, assumes that for each image stored in the database, the database already also contains one or more items of metadata, each of which identifies a point or region of the image in question, another image, and a score indicating a degree of similarity between that point or region and the other image. For example, a metadata item for an image frog.bmp might read:
meaning that the image frog.bmp has, at x, y coordinates 113, 42, a feature which shows a similarity score of 61 with the image toad.bmp. Further such items might indicate similarities of some other location within frog.bmp to toad.bmp, or similarities between frog.bmp and further images in the database.
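A minimal in-memory representation of such a metadata item might look as follows; the class and field names are illustrative, since the text does not prescribe a storage format:

```python
from dataclasses import dataclass

@dataclass
class MetadataItem:
    """One similarity link: a point in this image scored against another
    image.  Field names are an assumption for this sketch."""
    x: int            # x coordinate of the feature within this image
    y: int            # y coordinate of the feature within this image
    other_image: str  # the image the feature is similar to
    score: int        # similarity score for that point/image pair

# the frog.bmp example above, expressed in this form
item = MetadataItem(x=113, y=42, other_image="toad.bmp", score=61)
```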
The manner in which such metadata can be created will be described later; first, we will describe a retrieval process, with reference to the flowchart of
The retrieval process begins at Step 1 with the display of some initial images from the database. These could be chosen (1a) by some conventional method (such as keywords) or (1b) at random. At Step 2 a “held image” counter is set to zero and, as soon as the images are displayed, a timer defining a duration T is started (Step 3). During this time the user looks at the images and the system notes which of the images, and more particularly which parts of the images, the user finds to be of interest. This is done using the gaze tracker 10, which tracks the user's eye movement and records the position and duration of fixations (i.e., when the eye is not moving significantly). Its output takes the form of a sequence of reports, each consisting of screen coordinates xs, ys and the duration t of fixation at this point.
The value of T may be quite small allowing only a few saccades to take place during each iteration. This will mean that the displayed image set A will be updated frequently, but the content may not change dramatically at each iteration. On the other hand a large value of T may lead to most of the displayed images being replaced.
In Step 4, these screen coordinates are translated into an identifier for the image looked at, and x, y coordinates within that image. Also (Step 5), if there are multiple reports with the same x, y, the durations t for these are added so that a single total duration tg is available for each x, y reported. Some users may suffer from short eye movements that do not provide useful information, and so a threshold F may be applied so that any report with t≦F is discarded.
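Steps 4 and 5 can be sketched as follows, assuming the screen-to-image coordinate translation has already been performed. The report tuple layout and the threshold value are illustrative assumptions:

```python
def aggregate_fixations(reports, threshold_f=50):
    """Combine raw gaze-tracker reports, given here as tuples
    (image_id, x, y, duration), into a single total duration tg per
    distinct point, discarding reports with t <= F.  The tuple layout
    and the value of F are assumptions for this sketch."""
    totals = {}
    for image_id, x, y, t in reports:
        if t <= threshold_f:
            continue  # too short to provide useful information
        key = (image_id, x, y)
        totals[key] = totals.get(key, 0) + t
    return totals
```

For example, two fixations of 100 and 200 at the same point of frog.bmp would be combined into a single total duration of 300 for that point.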
The next stage is to use this information in combination with metadata for the displayed images in order to identify images in the database which have similarities with those parts of the displayed images that the user has shown interest in.
Thus at Step 6, for a displayed image a and an image b in the database, a level of interest Iab is calculated. For this purpose the user is considered to have been looking at a particular point if the reported position of his gaze is at, or within some region centred on, the point in question. The size of this region will depend on the size of the user's fovea centralis and his viewing distance from the screen: this may, if desired, be calibrated, though satisfactory results can be obtained if a fixed size is assumed.
For a displayed image a and an image b in the database, a level of interest Iab is calculated as follows:

Iab = Σi Σg tg·Sabi·δ(xg, yg, xi, yi)

where tg is the total fixation duration at position xg, yg (g = 1, . . . , G) and G is the number of total durations. Sabi is the score contained in the metadata for image a indicating a similarity between point xi, yi in image a and another image b, and there are I items of metadata in respect of image a and specifying the same image b (i = 1, . . . , I). Naturally, if, for any pair a, b, there is no metadata entry for Sabi, then Sabi is deemed to be zero. And δ(xg, yg, xi, yi) is 1 if xg, yg is within the permitted region centred on xi, yi, and zero otherwise. For a circular area, δ = 1 if and only if
(xg−xi)² + (yg−yi)² < r², where r is the assumed effective radius of the fixation area. Obviously Iab exists only for those images b for which values of Sabi are present in the metadata for one or more of the displayed images a.
The next step (Step 7) is to obtain a score Ib for each such image b, namely

Ib = Σa Iab

summed over all the displayed images a.
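The scoring of Steps 6 and 7 can be sketched as follows; the dictionary layout, the function names and the radius value are assumptions for the sketch:

```python
def level_of_interest(fixations, metadata, r=20):
    """Compute Iab for one displayed image a.  fixations maps a gaze
    point (xg, yg) to its total duration tg; metadata is a list of
    items (xi, yi, b, Sabi) for image a.  Returns a dict b -> Iab.
    The radius r of the fixation area is an illustrative value."""
    interest = {}
    for (xg, yg), tg in fixations.items():
        for xi, yi, b, s in metadata:
            # delta = 1 when the gaze point falls inside the circular
            # region of radius r centred on the metadata point
            if (xg - xi) ** 2 + (yg - yi) ** 2 < r ** 2:
                interest[b] = interest.get(b, 0) + tg * s
    return interest

def overall_scores(per_image_interest):
    """Ib: sum Iab over all the displayed images a (each element of the
    list is the dict b -> Iab produced for one displayed image)."""
    totals = {}
    for interest in per_image_interest:
        for b, i_ab in interest.items():
            totals[b] = totals.get(b, 0) + i_ab
    return totals
```

A gaze point near a metadata point thus contributes the product of its duration and the stored similarity score, exactly as in the summation above.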
Also in Step 7, the images with the highest values of Ib are retrieved from the database and displayed. The number of images that are displayed may be fixed, or, as shown, may depend on the number of images already held (see below).
Thus, if the number of images held is M and the number of images that are allowed to be displayed is N (assumed fixed), then the N-M highest scoring images will be chosen. The display is then updated by removing all the existing displayed images (other than held ones) and displaying the chosen images B instead. The images now displayed then become the new images A for a further iteration.
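The selection of the N-M highest scoring images might be sketched as follows; tie-breaking is not specified in the text, so this sketch sorts by score and then by identifier for determinism:

```python
def choose_display_set(scores, held_ids, n_display):
    """Choose the N - M highest-scoring candidate images to fill the
    display slots not occupied by held images.  scores maps image id
    to Ib; held_ids is the set of held images (M = len(held_ids));
    n_display is N.  Data layout is an assumption for this sketch."""
    n_free = n_display - len(held_ids)          # N - M free slots
    candidates = [b for b in scores if b not in held_ids]
    # highest Ib first; ties broken by id so the result is deterministic
    candidates.sort(key=lambda b: (-scores[b], b))
    return candidates[:n_free]
```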
At Step 8 the user is given the option to hold any or all (thereby stopping the search) of the images currently displayed and prevent them from being overwritten in subsequent displays. The user is also free to release images previously held. The hold and release operations may be performed by a mouse click, for example. The value of M is correspondingly updated.
In Step 9 the user is able to bar displayed images from being subsequently included in set B and not being considered in the search from that point. It is common for image databases to contain many very similar images, some even being cropped versions of each other, and although these clusters may be near to a user's requirements, they should not be allowed to block a search from seeking better material. This operation may be carried out by means of a mouse click, for example.
The user is able to halt the search in Step 10 simply by holding all the images on the screen; however, other mechanisms for stopping the search may be employed.
It should be noted that the user is able to invoke Steps 8 or 9 at any time in the process after Step 2. This could be a mouse click or a screen touch and may be carried out at the same time as continuing to gaze at the displayed images.
The invention does not presuppose the use of any particular method for generating the metadata for the images in the database. Indeed, it could in principle be generated manually. In general this will be practicable only for very small databases, though in some circumstances it may be desirable to generate manual entries in addition to automatically generated metadata.
We prefer to use the method described in our earlier patent application referred to above.
For a small database, it is possible to perform comparisons for every possible pair of images in the database, but for larger databases this is not practicable. For example, if a database has 10,000 images this would require 10⁸ comparisons.
Thus, in an enhanced version, the images in the database are clustered; that is, certain images are designated as vantage images, and each cluster consists of a vantage image and a number of other images. It is assumed that this clustering is performed manually by the person loading images into the database. For example if he is to load a number of images of horses, he might choose one representative image as the vantage image and mark others as belonging to the cluster. Note that an image may if desired belong to more than one cluster.
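One plausible way of exploiting such clusters when indexing a new image is sketched below. This is an illustrative arrangement (compare against the vantage images, then descend only into the best-matching cluster), not a listing reproduced from the text:

```python
def index_new_image(new_image, clusters, compare):
    """Sketch of metadata generation using vantage images.  clusters maps
    a vantage id to (vantage_image, {member_id: member_image}); compare(a, b)
    returns (x, y, score) for the best similarity found between a and b.
    All names and the descent strategy are assumptions for this sketch."""
    entries = []
    best_vantage, best_score = None, -1
    # compare against every vantage image
    for vid, (vimage, _members) in clusters.items():
        x, y, score = compare(new_image, vimage)
        entries.append((x, y, vid, score))
        if score > best_score:
            best_vantage, best_score = vid, score
    # descend only into the closest cluster, avoiding all-pairs comparison
    for mid, mimage in clusters[best_vantage][1].items():
        x, y, score = compare(new_image, mimage)
        entries.append((x, y, mid, score))
    return entries
```

With V vantage images and clusters of size c, this needs roughly V + c comparisons per image instead of one against every image in the database.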
The process of generating metadata is then facilitated:
The possibility, however, of other links also being generated is not excluded. In particular, once a database has been initially set up in this way, one could if desired make further comparisons between images, possibly at random, to generate more metadata, so that as time goes on more and more links between images are established.
In the above-described retrieval method it was assumed that the initial images were retrieved at random or by some conventional retrieval method. A better option is to allow the user to input his own images to start the search (Step 1c). In this case, before retrieval can commence it is necessary to set up metadata for these external starting images. This is done by running the set-up method to compare (Step 1d) each of these starting images with all the images in the database (or, in a large database all the vantage images). In this way the starting images (temporarily at least) effectively become part of the database and the method then proceeds in the manner previously described.
The “level of interest” is defined above as being formed from the products of the durations tg and the scores S; however, other monotonic functions may be used. The set-up method (and hence also the retrieval method) described earlier assumes that a metadata entry refers to a particular point within the image. Alternatively, the scoring method might be modified to perform some clustering of points so that an item of metadata, instead of stating that a point (x, y) in A has a certain similarity to B, states that a region of specified size and shape, at (x, y) in A, has a certain similarity to B. One method of doing this, which assumes a square area of fixed size (2Δ+1)×(2Δ+1), is as follows. Starting with the point scores S(x, y):
The values S1 are then stored in the metadata instead of S. The retrieval method proceeds as before, except that (apart from the use of S1 rather than S) the function δ is redefined as being 1 whenever the gaze point xg, yg falls within the square area or within a distance r of its boundary.
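The clustering of point scores into region scores S1 might be realised as follows; summing the point scores over the square is one plausible reading, since the exact formula is not reproduced in the text above:

```python
def cluster_point_scores(point_scores, delta=2):
    """Aggregate point scores S(x, y), given as a dict (x, y) -> score,
    into region scores S1(x, y) over a square of side 2*delta + 1 centred
    on each scored point.  The summation is an assumed interpretation of
    the clustering step, not a formula taken from the text."""
    s1 = {}
    for (x, y) in point_scores:
        total = 0
        for u in range(-delta, delta + 1):
            for v in range(-delta, delta + 1):
                total += point_scores.get((x + u, y + v), 0)
        s1[(x, y)] = total
    return s1
```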
If areas of variable size and/or shape are to be permitted then naturally the metadata would include a definition of the size and shape and the function δ modified accordingly.
In the interests of avoiding delays, during Steps 2 to 6, all ‘other’ images referenced by the metadata of the displayed image could be retrieved from the database and cached locally.
Note that the use of a gaze tracker is not essential; user input by means of a pointing device such as a mouse could be used instead, though the gaze tracker option is considered to be much easier to use.
During the process of image retrieval users traverse a sequence of images that are selected by the user from those presented by the computer. The machine endeavours to predict the most relevant groups of images, and the user selects on the basis of recognised associations with a real or imagined target image. The retrieval will be successful if the images are presented to the user on the basis of the same associations that the user also recognises. Such associations might depend upon semantic or visual factors, which can take virtually unlimited forms, often dependent upon the individual user's previous experience and interests. This system makes provision for the incorporation of semantic links between images derived from existing or manually captured textual metadata.
The process of determining the similarity score between two images necessarily identifies a correspondence between regions that give rise to large contributions towards the overall image similarity. A set of links between image locations, together with values of their strengths, is then available to a subsequent search through images that are linked in this way. There may be several such links between regions in pairs of images, and further multiple links to regions in other images in the database. This network of associations is more general than those used in other content-based image retrieval systems, which commonly impose a tree structure on the data and cluster images on the basis of symmetrical distance measures between images [27,37]. Such restrictions prevent associations between images being offered to users that are not already present in the fixed hierarchy of clusters. It should be noted that the links in this system are not symmetric, as there is no necessary reason for a region that is linked to a second to be linked in the reverse direction: the region in the second image may be more similar to a different region in the first image. The triangle inequality is not valid either, as it is quite possible for image A to be very similar to B, and B to C, while A is very different from C. Other approaches preclude solutions by imposing metrics that are symmetric and/or satisfy the triangle inequality.
This new approach to content-based image retrieval will allow a large number of pre-computed similarity associations between regions within different images to be incorporated into a novel image retrieval system. In large databases it will not be possible to compare all images with each other, so clusters and vantage images [37,38,39] will be employed to minimise computational demands. However, as users traverse the database, fresh links will be continually generated and stored that may be used for subsequent searches and reduce the reliance upon vantage images. The architecture will be capable of incorporating extra links derived from semantic information that already exists or which can be captured manually.
Using a keyboard or a mouse is not natural when carrying out purely visual tasks, and presents a barrier to many users. Eyetracking technology has now reached a level of performance at which it can be considered as an interface for image retrieval that is intuitive and rapid. If it is assumed that users fixate on image regions that attract their interest, this information may be used to provide a series of similar images that converge upon the target or upon an image that meets the user's demands. Of course a mouse could be used for the same task, but it has less potential for extremely rapid and intuitive access. Users would be free to browse in an open-ended manner or to seek a target image by just gazing at images and gaining impressions, but in so doing driving the search by means of saccades and fixation points. Similarity links between image regions, together with corresponding strength values, would provide the necessary framework for such a system, which would be the first of its kind in the world.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4646352||Jun 28, 1983||Feb 24, 1987||Nec Corporation||Method and device for matching fingerprints with precise minutia pairs selected from coarse pairs|
|US5113454||Aug 19, 1988||May 12, 1992||Kajaani Electronics Ltd.||Formation testing with digital image analysis|
|US5200820||Apr 26, 1991||Apr 6, 1993||Bell Communications Research, Inc.||Block-matching motion estimator for video coder|
|US5303885||Dec 14, 1992||Apr 19, 1994||Wade Lionel T||Adjustable pipe hanger|
|US5790413||Dec 9, 1994||Aug 4, 1998||Exxon Chemical Patents Inc.||Plant parameter detection by monitoring of power spectral densities|
|US5825016||Mar 6, 1996||Oct 20, 1998||Minolta Co., Ltd.||Focus detection device and accompanying optical equipment|
|US5867813||May 1, 1995||Feb 2, 1999||Ascom Infrasys Ag.||Method and apparatus for automatically and reproducibly rating the transmission quality of a speech transmission system|
|US5978027||Jul 29, 1997||Nov 2, 1999||Canon Kabushiki Kaisha||Image pickup apparatus having sharpness control|
|US6094507||Mar 17, 1998||Jul 25, 2000||Nec Corporation||Figure location detecting system|
|US6111984||Jan 21, 1998||Aug 29, 2000||Fujitsu Limited||Method for matching input image with reference image, apparatus for the same, and storage medium storing program for implementing the method|
|US6240208||Aug 5, 1998||May 29, 2001||Cognex Corporation||Method for automatic visual identification of a reference site in an image|
|US6266676||Jul 21, 2000||Jul 24, 2001||Hitachi, Ltd.||Link information management method|
|US6282317||Dec 31, 1998||Aug 28, 2001||Eastman Kodak Company||Method for automatic determination of main subjects in photographic images|
|US6304298||Sep 9, 1996||Oct 16, 2001||Orad Hi Tec Systems Limited||Method and apparatus for determining the position of a TV camera for use in a virtual studio|
|US6389417||Jun 29, 1999||May 14, 2002||Samsung Electronics Co., Ltd.||Method and apparatus for searching a digital image|
|US6608615 *||Sep 19, 2000||Aug 19, 2003||Intel Corporation||Passive gaze-driven browsing|
|US6778699||Sep 29, 2000||Aug 17, 2004||Eastman Kodak Company||Method of determining vanishing point location from an image|
|US6934415||Oct 16, 2001||Aug 23, 2005||British Telecommunications Public Limited Company||Visual attention system|
|US7046924 *||Nov 25, 2002||May 16, 2006||Eastman Kodak Company||Method and computer program product for determining an area of importance in an image using eye monitoring information|
|US7076118 *||Dec 5, 1997||Jul 11, 2006||Sharp Laboratories Of America, Inc.||Document classification system|
|US7327890 *||Dec 20, 2002||Feb 5, 2008||Eastman Kodak Company||Imaging method and system for determining an area of importance in an archival image|
|US20010013895||Feb 1, 2001||Aug 16, 2001||Kiyoharu Aizawa||Arbitrarily focused image synthesizing apparatus and multi-image simultaneous capturing camera for use therein|
|US20020081033||Oct 16, 2001||Jun 27, 2002||Stentiford Frederick W.M.||Visual attention system|
|US20020126891||Jan 17, 2001||Sep 12, 2002||Osberger Wilfried M.||Visual attention model|
|US20040120606 *||Dec 20, 2002||Jun 24, 2004||Eastman Kodak Company||Imaging method and system for determining an area of importance in an archival image|
|US20050031178||Sep 13, 2004||Feb 10, 2005||Biodiscovery, Inc.||System and method for automatically identifying sub-grids in a microarray|
|US20050074806||Oct 1, 2004||Apr 7, 2005||Genset, S.A.||Methods of genetic cluster analysis and uses thereof|
|US20050169535||Mar 21, 2003||Aug 4, 2005||Stentiford Frederick W.M.||Comparing patterns|
|US20060050993||Nov 24, 2003||Mar 9, 2006||Stentiford Frederick W||Searching images|
|EP0098152A2||Jun 28, 1983||Jan 11, 1984||Nec Corporation||Method and device for matching fingerprints|
|EP1126411A1||Feb 17, 2000||Aug 22, 2001||BRITISH TELECOMMUNICATIONS public limited company||Visual attention location system|
|EP1286539A1||Aug 23, 2001||Feb 26, 2003||BRITISH TELECOMMUNICATIONS public limited company||Camera control|
|GB1417721A||Title not available|
|JP2000207420A||Title not available|
|JP2002050066A||Title not available|
|JP2003187217A||Title not available|
|JPH03238533A||Title not available|
|JPH10260773A||Title not available|
|WO1982001434A1||Oct 20, 1981||Apr 29, 1982||Rockwell International Corp||Fingerprint minutiae matcher|
|WO1990003012A2||Sep 6, 1989||Mar 22, 1990||Harry James Etherington||Image recognition|
|WO1999005639A1||Jul 24, 1998||Feb 4, 1999||Arch Dev Corp||Wavelet snake technique for discrimination of nodules and false positives in digital radiographs|
|WO1999060517A1||May 18, 1999||Nov 25, 1999||Datacube Inc||Image recognition and correlation system|
|WO2000033569A1||Nov 24, 1999||Jun 8, 2000||Iriscan Inc||Fast focus assessment system and method for imaging|
|WO2001031638A1||Oct 24, 2000||May 3, 2001||Ericsson Telefon Ab L M||Handling variable delay in objective speech quality assessment|
|WO2001061648A2||Feb 8, 2001||Aug 23, 2001||British Telecomm||Visual attention location system|
|WO2002021446A1||Aug 22, 2001||Mar 14, 2002||British Telecomm||Analysing a moving image|
|WO2002098137A1||Jun 1, 2001||Dec 5, 2002||Univ Nanyang||A block motion estimation method|
|WO2003081523A1||Mar 21, 2003||Oct 2, 2003||British Telecomm||Comparing patterns|
|WO2003081577A1||Mar 24, 2003||Oct 2, 2003||British Telecomm||Anomaly recognition method for data streams|
|WO2004042645A1||Oct 24, 2003||May 21, 2004||Juergen Bueckner||Method, device and computer program for detecting point correspondences in sets of points|
|WO2004057493A2||Nov 24, 2003||Jul 8, 2004||British Telecomm||Searching images|
|WO2005057490A2||Dec 1, 2004||Jun 23, 2005||British Telecomm||Digital image enhancement|
|WO2006030173A1||Aug 26, 2005||Mar 23, 2006||British Telecomm||Analysis of patterns|
|1||Almansa et al., "Vanishing Point Detection Without Any A Priori Information", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 4, Apr. 2003, pp. 502-507.|
|2||Bradley et al., "JPEG 2000 and Region of Interest Coding", Digital Image Computing: Techniques and Applications, Melbourne, Australia, Jan. 21-22, 2002.|
|3||Bradley et al., "Visual Attention for Region of Interest Coding in JPEG 2000", Journal of Visual Communication and Image Representation, vol. 14, pp. 232-250, 2003.|
|4||Brown, "A Survey of Image Registration Techniques", ACM Computing Surveys, vol. 24, No. 4, Dec. 1992, pp. 325-376.|
|5||Buhmann et al., "Dithered Colour Quantisation", Eurographics 98, Sep. 1998, http://opus.tu-bs.de/opus/volltexte/2004/593/pdf/TR-tubs-cq-1998-01.pdf.|
|6||Cantoni et al., "Vanishing Point Detection: Representation Analysis and New Approaches", 11th Int. Conf. on Image Analysis and Processing, Palermo, Italy, Sep. 26-28, 2001.|
|7||Chang et al., "Fast Algorithm for Point Pattern Matching: Invariant to Translations, Rotations and Scale Changes", Pattern Recognition, vol. 30, No. 2, Feb. 1997, pp. 311-320.|
|8||Curtis et al., "Metadata: The Key to Content Management Services", 3rd IEEE Metadata Conference, Apr. 6-7, 1999.|
|9||European Search Report dated Jan. 8, 2003 for RS 018248 GB.|
|10||European Search Report dated Jan. 8, 2003 for RS 108250 GB.|
|11||European Search Report dated Jan. 9, 2003 for RS 108249 GB.|
|12||European Search Report dated Jan. 9, 2003 for RS 108251 GB.|
|13||Finlayson et al., "Illuminant and Device Invariant Colour Using Histogram Equalisation", Pattern Recognition, vol. 38, No. 2 (Feb. 2005), pp. 179-190.|
|14||Gallet et al., "A Model of the Visual Attention to Speed up Image Analysis", Proceedings of the 1998 IEEE International Conference on Image Processing (ICIP-98), Chicago, Illinois, Oct. 4-7, 1998, IEEE Computer Society, 1998, ISBN 0-8186-8821-1, vol. 1, pp. 246-250.|
|15||International Search Report dated Jun. 12, 2003.|
|16||International Search Report dated Mar. 18, 2002.|
|17||International Search Report mailed Feb. 9, 2006 in International Application No. PCT/GB2005/003339.|
|18||Itti et al., "Short Papers: A Model of Saliency-Based Visual Attention for Rapid Scene Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, No. 11, Nov. 1998, pp. 1254-1259.|
|19||Koizumi et al., "A New Optical Detector for a High-Speed AF Control", 1996 IEEE, pp. 1055-1061.|
|20||Lutton et al., "Contribution to the Determination of Vanishing Points Using Hough Transform", 1994 IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, No. 4, Apr. 1994, pp. 430-438.|
|21||Wood et al., "Iterative Refinement by Relevance Feedback in Content-Based Digital Image Retrieval", Proceedings of the Sixth ACM International Conference on Multimedia, Sep. 12, 1998, pp. 13-20.|
|22||Mahlmeister et al., "Sample-guided Progressive Image Coding", Proc. Fourteenth Int. Conference on Pattern Recognition, Aug. 16-20, 1998, pp. 1257-1259, vol. 2.|
|23||McLean et al., "Vanishing Point Detection by Line Clustering", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, No. 11, Nov. 1995, pp. 1090-1095.|
|24||Office Action dated Mar. 4, 2008 in JP Patent Application No. 2004-561596 with English translation.|
|25||Office Action dated Oct. 27, 2006 in EP 03 778 509.4-2201.|
|26||Okabe et al., "Object Recognition Based on Photometric Alignment Using RANSAC", Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03), vol. 2, pp. 221-228, Jun. 19-20, 2003.|
|27||Osberger et al., "Automatic Identification of Perceptually Important Regions in an Image", Proc. Fourteenth Int. Conference on Pattern Recognition, Aug. 16-20, 1998, pp. 701-704, vol. 1.|
|28||Ouerhani et al., "Adaptive Color Image Compression Based on Visual Attention", Proc. 11th Int. Conference on Image Analysis and Processing, Sep. 26-28, 2001, pp. 416-421.|
|29||Oyekoya et al., "Exploring Human Eye Behaviour Using a Model of Visual Attention", International Conference on Pattern Recognition 2004, Cambridge, Aug. 23-26, 2004, pp. 945-948.|
|30||Privitera et al., "Algorithms for Defining Visual Regions-of-Interest: Comparison with Eye Fixations", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 9, Sep. 2000, pp. 970-982.|
|31||Raeth et al., "Finding Events Automatically In Continuously Sampled Data Streams Via Anomaly Detection", Proceedings of the IEEE 2000 National Aerospace and Electronics Conference, NAECON, Oct. 10-12, 2000, pp. 580-587.|
|32||Rasmussen, "Texture-Based Vanishing Point Voting for Road Shape Estimation", British Machine Vision Conference, Kingston, UK, Sep. 2004, http://www.bmva.ac.uk/bmvc/2004/papers/paper-261.pdf.|
|33||Roach et al., "Recent Trends in Video Analysis: A Taxonomy of Video Classification Problems", 6th IASTED Int. Conf. on Internet and Multimedia Systems and Applications, Hawaii, Aug. 12-14, 2002, pp. 348-353.|
|34||Rohwer et al., "The Theoretical and Experimental Status of the n-Tuple Classifier", Neural Networks, vol. 11, No. 1, pp. 1-14, 1998.|
|35||Rother, "A New Approach for Vanishing Point Detection in Architectural Environments", 11th British Machine Vision Conference, Bristol, UK, Sep. 2000, http://www.bmva.ac.uk/bmvc/2000/papers/p39.pdf.|
|36||Rui et al., "A Relevance Feedback Architecture for Content-Based Multimedia Information Retrieval Systems", 1997 IEEE, pp. 82-89.|
|37||Rui et al., "Relevance Feedback: A Power Tool for Interactive Content-Based Image Retrieval", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 5, Sep. 1998, pp. 644-655.|
|38||Russ et al., "Smart Realisation: Delivering Content Smartly", J. Inst. BT Engineers, vol. 2, Part 4, pp. 12-17, Oct.-Dec. 2001.|
|39||Santini et al., "Similarity Matching", Proc 2nd Asian Conf on Computer Vision, pp. II 544-548, IEEE, 1995.|
|40||Sebastian et al., "Recognition of Shapes by Editing Shock Graphs", Proc. ICCV 2001, pp. 755-762.|
|41||Shufelt, "Performance Evaluation and Analysis of Vanishing Point Detection Techniques", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, No. 3, Mar. 1999, pp. 282-288.|
|42||Smeulders et al., "Content-Based Image Retrieval at the End of the Early Years", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 12, Dec. 2000, pp. 1349-1380.|
|43||Stentiford et al., "An Evolutionary Approach to the Concept of Randomness", The Computer Journal, pp. 148-151, Mar. 1972.|
|44||Stentiford et al., "Automatic Identification of Regions of Interest with Application to the Quantification of DNA Damage in Cells", Human Vision and Electronic Imaging VII, B.E. Rogowitz, T.N. Pappas, Editors, Proc. SPIE vol. 4662, pp. 244-253, San Jose, Jan. 20-26, 2002.|
|45||Stentiford, "A Visual Attention Estimator Applied to Image Subject Enhancement and Colour and Grey Level Compression", International Conference on Pattern Recognition 2004, Cambridge, Aug. 23-26, 2004, pp. 638-641.|
|46||Stentiford, "An Attention Based Similarity Measure for Fingerprint Retrieval", Proc. 4th European Workshop on Image Analysis for Multimedia Interactive Services, pp. 27-30, London, Apr. 9-11, 2003.|
|47||Stentiford, "An Attention Based Similarity Measure with Application to Content-Based Information Retrieval", Storage and Retrieval for Media Databases 2003, M.M. Yeung, R.W. Leinhart, C-S Li, Editors, Proc SPIE vol. 5021, Jan. 20-24, Santa Clara, 2003.|
|48||Stentiford, "An Estimator for Visual Attention Through Competitive Novelty with Application to Image Compression", Picture Coding Symposium 2001, Apr. 25-27, 2001, Seoul, Korea, pp. 101-104, http://www.ee.ucl.ac.uk/~fstentif/PCS2001.pdf.|
|49||Stentiford, "An Evolutionary Programming Approach to the Simulation of Visual Attention", Congress on Evolutionary Computation, Seoul, May 27-30, 2001, pp. 851-858.|
|50||Stentiford, "Attention Based Facial Symmetry Detection", International Conference on Advances in Pattern Recognition, Bath, UK, Aug. 22-25, 2005.|
|51||Stentiford, "Attention Based Symmetry Detection in Colour Images", IEEE International Workshop on Multimedia Signal Processing, Shanghai, China, Oct. 30-Nov. 2, 2005.|
|52||Stentiford, "Evolution: The Best Possible Search Algorithm?", BT Technology Journal, vol. 18, No. 1, Jan. 2000 (Movie Version).|
|53||Stentiford, "The Measurement of the Salience of Targets and Distractors through Competitive Novelty", 26th European Conference on Visual Perception, Paris, Sep. 1-5, 2003, (Poster).|
|54||Vailaya et al., "Image Classification for Content-Based Indexing", IEEE Transactions on Image Processing, vol. 10, No. 1, Jan. 2001, pp. 117-130.|
|55||Walker et al., "Locating Salient Facial Features Using Image Invariants", Proc. 3rd IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 242-247.|
|56||Wang et al., "Efficient Method for Multiscale Small Target Detection from a Natural Scene", 1996 Society of Photo-Optical Instrumentation Engineers, Mar. 1996, pp. 761-768.|
|57||Wixson, "Detecting Salient Motion by Accumulating Directionally-Consistent Flow", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 8, Aug. 2000, pp. 774-780.|
|58||Xu et al., "Video Summarization and Semantics Editing Tools", Storage and Retrieval for Media Databases, Proc. SPIE, vol. 4315, San Jose, Jan. 21-26, 2001.|
|59||Zhao et al., "Face Recognition: A Literature Survey", CVL Technical Report, University of Maryland, Oct. 2000, ftp://ftp.cfar.umd.edu/TRs/CVL-Reports-2000/TR4167-zhao.ps.gz.|
|60||Zhao et al., "Morphology on Detection of Calcifications in Mammograms", 1992 IEEE, pp. III-129-III-132.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8086502||Mar 31, 2008||Dec 27, 2011||Ebay Inc.||Method and system for mobile publication|
|US8312374 *||Jul 24, 2009||Nov 13, 2012||Sony Corporation||Information processing apparatus and method and computer program|
|US8321293||Jul 1, 2011||Nov 27, 2012||Ebay Inc.||Systems and methods for marketplace listings using a camera enabled mobile device|
|US8472691 *||Mar 18, 2010||Jun 25, 2013||Brainlab Ag||Method for ascertaining the position of a structure in a body|
|US8521609||Nov 26, 2012||Aug 27, 2013||Ebay Inc.||Systems and methods for marketplace listings using a camera enabled mobile device|
|US8542908 *||May 11, 2008||Sep 24, 2013||Yeda Research & Development Co. Ltd.||Bidirectional similarity of signals|
|US8825660 *||Mar 17, 2009||Sep 2, 2014||Ebay Inc.||Image-based indexing in a network-based marketplace|
|US8827710||May 19, 2011||Sep 9, 2014||Microsoft Corporation||Realtime user guidance for freehand drawing|
|US8994834||Dec 20, 2012||Mar 31, 2015||Google Inc.||Capturing photos|
|US9092700||Dec 10, 2012||Jul 28, 2015||Canon Kabushiki Kaisha||Method, system and apparatus for determining a subject and a distractor in an image|
|US20100177955 *||May 11, 2008||Jul 15, 2010||Denis Simakov||Bidirectional similarity of signals|
|US20100239152 *||Mar 18, 2010||Sep 23, 2010||Furst Armin||Method for ascertaining the position of a structure in a body|
|US20110087659 *||Dec 21, 2009||Apr 14, 2011||Prasenjit Dey||Document relevance determining method and computer program|
|US20130101209 *||Dec 14, 2012||Apr 25, 2013||Peking University||Method and system for extraction and association of object of interest in video|
|U.S. Classification||382/305, 382/173, 707/999.001, 358/403, 382/117|
|International Classification||G06F17/30, H04N1/00, G06K9/54, G06F7/00, G06K9/00, G06K9/60, G06K9/34|
|Cooperative Classification||Y10S707/99931, G06F17/30265|
|Jun 3, 2005||AS||Assignment|
Owner name: BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY,
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STENTIFORD, FREDERICK WARWICK MICHAEL;REEL/FRAME:017192/0221
Effective date: 20040121
|Mar 14, 2013||FPAY||Fee payment|
Year of fee payment: 4