WO2003075221A1 - Mechanism for unsupervised clustering - Google Patents

Mechanism for unsupervised clustering

Info

Publication number
WO2003075221A1
WO2003075221A1 (PCT/FI2003/000152)
Authority
WO
WIPO (PCT)
Prior art keywords
weight
weight vector
coefficient
cluster
lattice
Prior art date
Application number
PCT/FI2003/000152
Other languages
French (fr)
Inventor
Adrian Flanagan
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to AU2003208735A priority Critical patent/AU2003208735A1/en
Priority to US10/506,634 priority patent/US7809726B2/en
Publication of WO2003075221A1 publication Critical patent/WO2003075221A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions

Definitions

  • the invention relates to clustering techniques that are generally used to classify input data into groups or clusters without prior knowledge of those clusters. More particularly, the invention relates to methods and apparatus for automatically determining cluster centres.
  • An example of such clustering techniques is a Self-Organizing Map, originally invented by Teuvo Kohonen.
  • the SOM concept is well documented, and a representative example of an SOM application is disclosed in US patent 6260036.
  • the current framework under investigation for describing and analyzing a context has a critical component based on the clustering of data. This clustering is expected to appear at every stage of context computation, from the processing of raw input signals to the determination of a higher order context. Clustering has been well studied over many years and many different approaches to the problem exist.
  • One of the main problems is knowing how many clusters exist in the data.
  • Techniques exist to estimate the number of clusters in a data set; however, the methods either require some form of a priori information or assumptions on the data, or they estimate the number of clusters on the basis of an analysis of the data, which may require storing the data and be computationally demanding. None of these approaches seems entirely suitable for an on-line, unsupervised cluster analysis in a system with limited resources, as would be the case for a context-aware mobile terminal.
  • Clustering is an important part of any data analysis or information processing problem. The idea is to divide a data set into meaningful subsets so that points in any subset are closely related to each other and not to points in other subsets. The definition of 'related' may be as simple as the distance between the points. Many different approaches and techniques can be applied to achieve this goal. Each approach has its own assumptions and advantages and disadvantages.
  • One of the best-known methods from the partition-based clustering class is the K-means algorithm, which tries to adaptively position K 'centres' that minimize the distance between the input data vectors and the centres.
  • One of its disadvantages is that the number of the K centres must be specified before the clustering is attempted. In the case of an unknown data set this may not always be possible.
  • the algorithm can be run several times with different values of K and the optimum K is chosen on the basis of some criteria. For an on-line system where the data is not stored, this approach is slow and impractical.
  • An object of the invention is to provide a method and an apparatus for implementing the method so as to alleviate the above disadvantages.
  • the object of the invention is to provide a method for automatically determining cluster centres, such that the method is easily implemented in a computer system.
  • a computer-implemented method can be implemented by the following steps: initializing a first data structure that comprises a lattice structure of weight vectors that create an approximate representation of a plurality of input data points; performing a first iterative process for iteratively updating the weight vectors such that they move toward cluster centres; performing a second iterative process for iteratively updating a second data structure utilizing the results of the iterative updating of the first data structure; and determining, on the basis of the second data structure, the weight vectors that correspond to the cluster centres of the input data points.
  • a preferred embodiment of the invention is based on the following idea.
  • Self-organizing maps generally use a lattice structure of nodes, and a weight vector is associated with each node.
  • Each data point in the input data is iteratively compared with each weight vector of the lattice, and the node whose weight vector best approximates the data point is chosen as the winner for that data point and iteration.
  • the weight vectors associated with each node of the lattice are adjusted.
  • the adjustment made to each node's weight vector is dependent on the winning node through a neighbourhood function. Following the adjustment of the weight vectors a next iteration step is taken.
  • the term 'neighbourhood function' is a function of distance on the lattice between the winning node and the node being updated such that the value of the function generally decreases as the distance increases. With normalized SOMs, the value of the function is one for a distance of zero.
  • a common form for the neighbourhood function is Gaussian, but preferred embodiments of the invention make use of neighbourhood functions that are not strictly Gaussian.
  • a second iterative process is run, and the second iterative process gives a numerical value for the lattice nodes such that the numerical value increases if the node's weight vector is positioned at a cluster centre. Then the cluster centres are determined, not on the basis of the weight vectors, but on the basis of the numerical values produced by the second iterative process.
  • An advantage of the invention is that it is much easier for machines to locate local maxima in the numerical values than to locate cluster centres in the clustering mechanism itself, where the cluster centres are the locations at which the density of the weight vectors is highest.
  • the second data structure comprises a coefficient for each of the weight vectors in the lattice structure.
  • Each iteration in the first iterative process comprises selecting a winner weight vector for each of the data points on the basis of a distance measure between the input data point and the weight vector.
  • Each iteration in the second iterative process comprises calculating a next value of each coefficient on the basis of the current value of the coefficient; and a combination of: 1) the current coefficient of the winner weight vector, 2) a second neighbourhood function that approaches zero as the distance on the lattice structure between the weight vector and the winner weight vector increases, and 3) an adjustment factor for adjusting convergence speed between iterations.
  • the combination referred to above can be a simple multiplication. If the second neighbourhood function is selected appropriately, such that the second data structure has distinct borders, the step of determining the weight vectors can be accomplished simply by selecting local maxima in the second data structure.
  • a preferred version of the second neighbourhood function is not monotonic, but gives negative values at some distances. Also, the second neighbourhood function is preferably made more pronounced over time as the number of prior iterations increases.
  • the first data structure is or comprises a self-organizing map and the input data points represent real-world quantities.
  • Figure 1 illustrates a self-organizing map (SOM) with six clusters of input data points
  • Figure 2 shows a typical form of a neighbourhood function used in an SOM algorithm
  • Figure 3 shows a 15 by 15 lattice structure resulting from uniformly distributed input data
  • Figure 4 shows a probabilistic map for visualizing the cluster centres in an SOM with six cluster centres
  • Figure 5 shows a preferred form of the second neighbourhood function used in the second iterative process according to a preferred embodiment of the invention
  • Figure 6 shows a computer pseudocode listing for generating the function shown in Figure 5.
  • Figure 7 shows a coefficient map that visualizes the data structure used for locating the cluster centres in the SOM.
  • Figure 8 is a flow chart illustrating a method according to the invention wherein the method comprises two iterative processes run in tandem;
  • Figures 9 and 10 show an SOM map and a coefficient map, respectively, with five clusters;
  • Figures 11 and 12 show an SOM and a coefficient map, respectively, for an exceptional distribution of input data
  • Figure 13 shows an example of a neighbourhood function used in an automatic cluster-labelling algorithm
  • Figure 14 shows the result of the automatic cluster-labelling algorithm
  • Figure 15 shows how the automatic cluster-labelling algorithm can be integrated with a cluster-determination algorithm according to the invention.
  • An SOM is a learning algorithm or mechanism from the area of Artificial Neural Networks (ANNs) that finds wide application in the areas of vector quantization, unsupervised clustering and supervised classification. Reasons for its widespread use include its robustness, even for data sets of very different and even unknown origin, as well as the simplicity of its implementation.
  • the SOM uses a set of weight vectors to form a quantized, topology-preserving mapping of the input space. The distribution of the weights reflects the probability distribution of the input.
  • the SOM representation is used in clustering applications generally by clustering the weight vectors after training, using for example the K-means algorithm.
  • the problem of the original K-means algorithm still remains, that is, determining the value of K for the number of centres.
  • a method based on the SOM algorithm which can be used to automatically determine cluster centres in an unsupervised manner.
  • the number of clusters does not have to be predefined and groups of adjacent SOM weight vectors represent the cluster centres.
  • the cluster is represented by a set of centres which correspond to weight vectors in the SOM.
  • the algorithm requires few additional computational resources and makes direct use of the information generated during the learning of the SOM. In this sense the algorithm can be considered a hybrid of the K-means algorithm and a method based on a probabilistic mixture model.
  • Each cluster is represented by a set of centres, which correspond to a set of weights in the SOM.
  • the SOM weights form an approximation of the probability distribution of the input.
  • Figure 1 shows a self-organizing map 10. More particularly, Figure 1 shows an online version of the SOM. There also exists a "batch SOM", but this requires storing all the input points and going through all of them several times, which is why the online version is preferred here.
  • Reference numerals 11 generally denote input data points that in most SOM applications represent real-world events or quantities.
  • the SOM itself consists of a total of N weight vectors X_k ∈ R^m distributed on an n-dimensional lattice. Thus there are two associated dimensions, a dimension m of the input data and the weight vectors, and a dimension n of the lattice.
  • the reason for having a lattice is to be able to define neighbourhood relations between adjacent weights. For example, if each weight k has an associated position vector i_k on the lattice, then a distance d_L(i_l, i_k) between weights k and l on the lattice can be defined.
  • the initial values of the weight vectors can be randomly chosen, as the convergence of the algorithm is independent of the initial conditions.
  • a distance d(ω(t), X_k) between an input ω(t) at time t and each of the weight vectors X_k is calculated.
  • each weight vector is updated as:
  • X_k(t+1) = X_k(t) + α(t) h(k, v(t)) (ω(t) − X_k(t))   [2]
  • α(t) ∈ [0,1] is a decreasing function of time which determines the learning rate of the SOM
  • Figure 2 shows a typical form 21 of a neighbourhood function h used in the SOM algorithm.
  • the SOM map is built iteratively. In other words, the steps in equations [1] and [2] are repeated for each t. For uniformly distributed input data and a two-dimensional lattice with 15 x 15 weights, the SOM algorithm gives the result shown in Figure 3.
  • Figure 3 shows a plot of the weight values on a support [0,1] × [0,1] of the input, where adjacent weights on the lattice are connected by a line segment in the plot. None of the line segments intersect, which indicates that the weights are organized.
  • the weight values are spread uniformly over the input space and approximate the uniform distribution of the input data. To arrive at such a result the learning is divided into two phases. In the first phase, there is a large α and a, and the algorithm is in the self-organizing phase where the weights become organized. The second phase is the convergence phase, where the neighbourhood width is reduced towards zero as a increases and the learning rate α approaches zero. The weight vectors converge to the approximation of the probability distribution of the input.
  • Figure 1 shows data points generated from six different normal distributions with different means and variances, and plotted on this there are the weight vectors from the SOM when data from this distribution was used as an input. It is seen that the weight vectors are organized and are more concentrated at the centres of the clusters. Hence the weight vectors approximate the probability distribution of the input.
  • Reference numerals 13_1 to 13_6 denote six circles located at the cluster centres. The circles are not normally part of the SOM. While it is easy for humans to determine where the cluster centres are, this task is surprisingly difficult for computers.
  • Figure 4 shows a plot of the calculated probability of each weight vector being chosen as a winner during the simulation.
  • the probability for each weight vector is determined by keeping a record of the number of times the weight vector was chosen as a winner and then dividing the number by the total number of iterations in the simulation.
  • the i, j axes indicate the position of each weight on the SOM lattice and the p(i, j) axis shows the probability of each of the weights. From this probability distribution a cluster structure is visible, where local maxima in the surface correspond to weight vectors positioned at input data cluster centres and local minima form boundaries between the local maxima and hence between the clusters.
  • the different clusters are not clearly distinct and the surface is not smooth. This roughness of the surface and variations in the peak values of the local maxima may lead to problems when trying to automatically determine the cluster centres using an algorithm which would associate a local maximum of the probability with a cluster centre. Defining a global threshold above which a weight vector would be considered a local maximum and a second global threshold below which a weight vector would be considered a local minimum leads to problems. For example, if in one part of the lattice the probability at a local maximum is below the global threshold, that weight vector would not be considered a local maximum, and hence not a cluster centre, which could lead to an erroneous choice of cluster centres. In practice, as in the example shown, this is likely to occur. An alternative would be to define local thresholds for determining a local maximum and a local minimum. However, this would be a complicated process requiring an involved computation with no guarantee of a correct result.
  • a clustering algorithm is based on this observation.
  • the algorithm provides a means to smooth the probability surface just described.
  • the use of a neighbourhood function means that the smoothing operation is done locally, emphasising the local maximum as well as the local minimum.
  • the positive elements of the neighbourhood function emphasise the local maximum and the negative components emphasise the local minimum.
  • the result is that all the local maxima consistently reach high values and the local minima consistently reach low values.
  • This allows for the use of a global threshold to identify the maximum and minimum and thus facilitates the use of a computer in the process.
  • instead of directly using the probability of a weight being the winner, a related measure is used in the proposed algorithm, which is described as follows.
  • For each weight i, define a scalar coefficient C_i. This coefficient is bounded to the interval [0, 1], and its initial value before training may be quite small.
  • the SOM algorithm is carried out as described earlier. At each iteration the winner weight v(t) is determined as in equation [1] and each SOM weight i is updated according to equation [2]. At the same time each coefficient C_i is updated as:
  • C_i(t+1) = C_i(t) + C_{v(t)}(t) h_m(d_L) δ   [4]
  • h_m is the second neighbourhood function.
  • d_L and v(t) are the same terms that were used in the SOM algorithm, that is, d_L is the distance on the lattice between node i and the winner node v(t).
  • δ is a small step value for adjusting convergence speed; δ is somewhat analogous to the α in the SOM algorithm.
  • Figure 5 shows a preferred form 51 of the second neighbourhood function h_m.
  • Figure 5 shows a time-dependent version of the second neighbourhood function h_m where the function becomes more pronounced as the number of prior iterations increases. In other words, with increasing time, the function h_m achieves negative values sooner (as the distance d_L increases), and the negative values are much more negative than during the earlier iterations.
  • the preferred form 51 of the second neighbourhood function h_m somewhat resembles the first neighbourhood function h used in the SOM algorithm. Like the first neighbourhood function h, the second neighbourhood function h_m starts at 1 when the distance is zero. Also, h and h_m both approach zero as the distance d_L increases.
  • for some distances, however, the second neighbourhood function h_m is preferably negative.
  • for instance, in a 10 by 10 lattice, h_m may be negative for distances over 3.
  • the negative value of the second neighbourhood function h_m can be seen as a form of lateral inhibition between the weights.
  • Lateral inhibition is a mathematical model that tries to approximate real biological phenomena. Similar to the h function used in the SOM, weights adjacent to the centre of activity have their coefficients and hence their activity increased, while the activity of weights further away from the centre of activity is inhibited.
  • Figure 6 shows an example of a computer pseudocode listing for generating the function shown in Figure 5.
  • the second neighbourhood function h_m varied with time and initially did not have a high level of lateral inhibition. Towards the end of the simulation, the level of lateral inhibition was increased.
  • the surface shown in Figure 7 has six distinct and separated elevated regions. As a result of forcing the coefficients between 0 and 1, each elevated region corresponds to a set of adjacent weights on the lattice whose coefficients C_i have saturated at or near 1 and whose weights X_i are found at the cluster centre. These regions are surrounded by regions where the coefficients C_i have been driven to small values.
  • plot 70 is for visualization purposes only and is not required by computers. Instead, reference number 71 points to an array of current coefficients C_i(t) and reference number 72 to an array of next coefficients C_i(t+1). It is the array 72 of updated coefficients that a computer uses to determine the cluster centres and their locations.
  • An arrow 806 denotes updating of the coefficients that takes place in step 806 of Figure 8 that will be described next.
  • Figure 8 is a flow chart illustrating a method according to a preferred embodiment of the invention wherein the method comprises two iterative processes 81 and 82 run in tandem. The odd-numbered steps on the left-hand side of Figure 8 relate to the known SOM algorithm.
  • step 801 the SOM algorithm is initialized.
  • the initialization comprises selecting initial values for α, a and h, and randomly initializing the weight vectors X_i. Since Figure 8 shows an online algorithm, the values of the inputs ω(t) are not known at this stage.
  • Step 802 is a corresponding initialization step for the second data structure.
  • Steps 803, 805, 807 and 809 form the conventional SOM iteration.
  • input ⁇ (t) is presented to the SOM at iteration t.
  • step 805 the winner weight v(t) is selected according to equation [1].
  • step 807 the weight vectors are updated according to equation [2].
  • step 807' the variable α that determines the learning speed and/or the scaling factor a of the first neighbourhood function h are updated.
  • step 809 the iterative loop is repeated until some predetermined stopping criteria are met. For instance, the loop may be set to run a predetermined number of times, or the loop may be interrupted when each succeeding iteration fails to produce a change that exceeds a given threshold.
  • Steps 806 and 808 relate to the second iterative process 82 for maintaining and updating the second data structure that is used to determine the cluster centres.
  • step 806 the coefficients C_i are updated on the basis of the winner weight v(t) according to equation [4].
  • step 808 the parameters of the second neighbourhood function h_m are updated (see Figure 5).
  • steps 806 and 808 of the second iterative process 82 are interleaved with the steps of the first iterative process 81.
  • step 806 utilizes intermediate calculation results of step 805.
  • step 808, in which the second neighbourhood function h_m is updated, may utilize data from step 807' that updates the variable α and the first neighbourhood function.
  • Figures 9 and 10 show an SOM map and a coefficient map, respectively. In this example the number of clusters in the input data distribution was reduced by one and the same SOM algorithm was used.
  • Figure 9 shows both the input data and the final configuration of the weight vectors.
  • Figure 10 shows the resulting C_i values. The five cluster centres are once again quite clear. It is remarkable that the same algorithm, without any adjustments, was capable of finding the new number of clusters and the locations of those clusters.
  • lattice which in this case is two-dimensional.
  • Each point of the lattice is given a label, e.g. (2,3), (0,15).
  • This lattice remains fixed and the labelling of the lattice points does not change.
  • the lattice structure is a 15x15 lattice. It is from the lattice that the distances d_L used in all neighbourhood functions are determined. For instance, the distance between lattice points (1,3) and (7,8) could be 6, depending on the distance measure we use.
  • Each lattice point is associated with a weight vector.
  • the dimension of the weight vector is always the same as the dimension of the input data vector.
  • the input data has two dimensions. It is the weight vectors that change depending on the input data point and the distance between two points on the lattice, the first point being the lattice point associated with the winner and the second point the lattice point associated with the weight vector to be updated. This distance is not used to update the weight vector directly, but to determine the value of the first neighbourhood function in the update of the weight vector.
  • Figures 1 and 7 show a two-dimensional plot of the input data points and the weight vectors from the SOM. The plot is in the input space and by chance all the input points were bounded by [-8, 8].
  • the data points are just the points shown.
  • the weight vectors are plotted at the crossing point of the lines. Another way of looking at this is to draw a line between weight vectors whose associated lattice nodes are the closest nodes on the lattice.
  • the fact that the plot appears as a lattice means that the weights are organized, that is, the weight vectors associated with adjacent lattice nodes appear in the input data space as adjacent to each other.
  • the relationship between the coefficients and the SOM lattice is the same as the relationship between the weight vectors and the SOM lattice, except that the dimension of the coefficients is always 1.
  • the relationship is somewhat similar to, though not the same as, a probability measure, where the probability would be that of the weight vector associated with the coefficient's lattice node being chosen as the winner for any given data input.
  • Another interpretation is that the coefficients somehow represent an exaggerated version of the probability distribution of the input data.
  • the lattice is a fixed structure.
  • the weight vector is in the input data space.
  • the dimensionality of the input data and the lattice are not necessarily the same.
  • the input data may have any number of dimensions, such as 5, 10 or 15. In that case the dimensions of the weight vectors would also be 5, 10, or 15. But the lattice could still be two-dimensional. We could also choose a different lattice to begin with (i.e. change the SOM structure) and make it four-dimensional, for example. In this case, if we still choose to have 15 lattice nodes along each axis of the lattice then we would have 15 × 15 × 15 × 15 = 50625 lattice nodes and, associated with each lattice node, a 5-, 10- or 15-dimensional weight vector.
  • Figure 11 shows a mixed clustering example consisting of two very different clusters, namely a first cluster 112 described by a normal distribution and a second cluster 114 described by a uniform distribution along a parabola.
  • the same SOM was used as in the previous two examples.
  • Figure 12 shows a map 120 of coefficients C_{i,j} for this example. There are two distinct areas: a connected region 122 of high values around three edges of the lattice, which corresponds to the parabolic distribution cluster 114 in Figure 11, and a disc shape 124 of connected high values, which corresponds to the normal distribution cluster 112.
  • This example is quite interesting because of the complexity of the parabolic distribution. In a mixture-model clustering algorithm, without any prior information it would be difficult to model this distribution with a normal distribution.
  • a further preferred embodiment of the invention relates to automatic and unsupervised labelling of the clusters.
  • a set of cluster labels B = {1, 2, ..., K} is defined.
  • K should be greater than or equal to the expected number of clusters.
  • K need not exceed N, the total number of weights in the SOM, as N imposes a limit on the maximum number of clusters which can be identified.
  • Each coefficient β_il ∈ [0, 1] represents a weighting between the SOM node i and the label l.
  • the updating algorithm used on these coefficients to achieve automatic labelling proceeds as follows. At time t the SOM weight v(t) is chosen as the winner. The weight and its neighbours are updated as in the normal SOM algorithm. The coefficient C_{v(t)} is also updated, as are the coefficients C_i of the neighbours of v(t). The updating of the coefficients C_i and the interpretation of the results form the basis of the main invention, namely the automatic and unsupervised clustering. In the automatic and unsupervised labelling of the clusters, at the same time t the β_il are updated as follows.
  • equation [4] uses C_v at iteration t whereas equations [8] and [9] use C_v at iteration t+1.
  • the C_v coefficient changes so little between iterations that either value can be used, depending on which value is more conveniently available.
  • Figure 14 shows an example of the result of the combination of the cluster-determination algorithm and the automatic cluster-labelling algorithm.
  • Figure 15 shows how the automatic cluster-labelling algorithm can be integrated with the cluster-determination algorithm according to the invention.
  • Figure 15 is a modification of Figure 8, with step 806 followed by step 806' that relates to the automatic cluster-labelling algorithm.
  • step 806' the cluster label l_v(t) is determined according to equation [7].
  • the β_il components are updated according to equation [8] and the β_jk components are updated according to equation [9].
  • By placing step 806' inside the second iterative process 82, maximal synergy benefits are obtained; in other words, the computational overhead is kept to a minimum because step 806' makes use of the winner selection and coefficient determination already performed for the SOM construction and the automatic cluster determination.
  • the technique according to the invention allows automatic determination of cluster centres with a minimal amount of information on the data. No explicit, initial estimate of the number of clusters is required. Given the nature of convergence of the SOM, there is no need to know the type of distribution of the clusters either. In this respect the algorithm is very general. However, although explicit initial estimates of the number of clusters are not required, care should be taken to ensure that the SOM lattice contains a number of nodes larger than the expected number of clusters, as well as choosing a non-monotonic second neighbourhood function that is negative for large distances and provides a level of lateral inhibition, so that the coefficients for the cluster regions stand out more clearly.
  • the preferred embodiment of the invention, in which the second iterative process is interleaved with the conventional iterative process, requires little computational overhead.
  • this embodiment of the invention is especially suitable for on-line application where human supervision is not available.
  • Initial simulations on artificial data show that the inventive technique is simple and apparently robust and is more easily generalized than most current clustering algorithms.
  • the technique according to the invention can be considered somewhat as a hybrid of the K-means and probabilistic-model-based clustering.

Abstract

A computer-implemented method for determining cluster centres in a first data structure, wherein the first data structure comprises a lattice structure of weight vectors that create an approximate representation of a plurality of input data points. The method comprises performing a first iterative process (81) for iteratively updating the weight vectors such that they move toward cluster centres; performing a second iterative process (82) for iteratively updating a second data structure utilizing results of the iterative updating of the first data structure; and determining the weight vectors that correspond to cluster centres of the input data points on the basis of the second data structure.

Description

MECHANISM FOR UNSUPERVISED CLUSTERING
BACKGROUND OF THE INVENTION
The invention relates to clustering techniques that are generally used to classify input data into groups or clusters without prior knowledge of those clusters. More particularly, the invention relates to methods and apparatus for automatically determining cluster centres. An example of such clustering techniques is a Self-Organizing Map, originally invented by Teuvo Kohonen. The SOM concept is well documented, and a representative example of an SOM application is disclosed in US patent 6260036. The current framework under investigation for describing and analyzing a context has a critical component based on the clustering of data. This clustering is expected to appear at every stage of context computation, from the processing of raw input signals to the determination of a higher order context. Clustering has been well studied over many years and many different approaches to the problem exist. One of the main problems is knowing how many clusters exist in the data. Techniques exist to estimate the number of clusters in a data set; however, the methods either require some form of a priori information or assumptions on the data, or they estimate the number of clusters on the basis of an analysis of the data, which may require storing the data, and be computationally demanding. None of these approaches seems entirely suitable for an on-line, unsupervised cluster analysis in a system with limited resources, as would be the case for a context-aware mobile terminal.
Clustering is an important part of any data analysis or information processing problem. The idea is to divide a data set into meaningful subsets so that points in any subset are closely related to each other and not to points in other subsets. The definition of 'related' may be as simple as the distance between the points. Many different approaches and techniques can be applied to achieve this goal. Each approach has its own assumptions and advantages and disadvantages. One of the best-known methods from the partition-based clustering class is the K-means algorithm, which tries to adaptively position K 'centres' that minimize the distance between the input data vectors and the centres. One of its disadvantages is that the number of the K centres must be specified before the clustering is attempted. In the case of an unknown data set this may not always be possible. The algorithm can be run several times with different values of K and the optimum K is chosen on the basis of some criteria. For an on-line system where the data is not stored, this approach is slow and impractical.
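As a concrete illustration of the drawback just described, the following minimal K-means sketch (in Python with NumPy; the function name, the random initialization and the fixed iteration count are assumptions made for this example and are not part of the patent) shows that the number of centres K must be supplied before any clustering can take place.

    import numpy as np

    def kmeans(data, K, iterations=100, seed=0):
        # K must be chosen in advance, which is the disadvantage noted above
        rng = np.random.default_rng(seed)
        centres = data[rng.choice(len(data), K, replace=False)].astype(float)
        for _ in range(iterations):
            # assign every data vector to its nearest centre
            labels = np.argmin(np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2), axis=1)
            # move each centre to the mean of the vectors assigned to it
            for k in range(K):
                if np.any(labels == k):
                    centres[k] = data[labels == k].mean(axis=0)
        return centres, labels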
Thus a problem associated with the known clustering techniques is that while it is relatively easy for humans to determine the cluster centres, such a determination is difficult for computers.
BRIEF DESCRIPTION OF THE INVENTION
An object of the invention is to provide a method and an apparatus for implementing the method so as to alleviate the above disadvantages. In other words, the object of the invention is to provide a method for automatically determining cluster centres, such that the method is easily implemented in a computer system.
The object of the invention is achieved by a method and an arrangement which are characterized by what is stated in the independent claims. The preferred embodiments of the invention are disclosed in the dependent claims.
A computer-implemented method according to the invention can be implemented by the following steps: initializing a first data structure that comprises a lattice structure of weight vectors that create an approximate representation of a plurality of input data points; performing a first iterative process for iteratively updating the weight vectors such that they move toward cluster centres; performing a second iterative process for iteratively updating a second data structure utilizing the results of the iterative updating of the first data structure; and determining, on the basis of the second data structure, the weight vectors that correspond to the cluster centres of the input data points.
A preferred embodiment of the invention is based on the following idea. Self-organizing maps generally use a lattice structure of nodes, and a weight vector is associated with each node. Each data point in the input data is iteratively compared with each weight vector of the lattice, and the node whose weight vector best approximates the data point is chosen as the winner for that data point and iteration. Then the weight vectors associated with each node of the lattice are adjusted. The adjustment made to each node's weight vector is dependent on the winning node through a neighbourhood function. Following the adjustment of the weight vectors a next iteration step is taken.
As used in this context, the term 'neighbourhood function' is a function of distance on the lattice between the winning node and the node being updated such that the value of the function generally decreases as the distance increases. With normalized SOMs, the value of the function is one for a distance of zero. A common form for the neighbourhood function is Gaussian, but preferred embodiments of the invention make use of neighbourhood functions that are not strictly Gaussian.
In addition to the primary iteration process for updating the SOM, or other clustering mechanism, a second iterative process is run, and the second iterative process gives a numerical value for the lattice nodes such that the numerical value increases if the node's weight vector is positioned at a cluster centre. Then the cluster centres are determined, not on the basis of the weight vectors, but on the basis of the numerical values produced by the second iterative process.
Thus the problem of locating cluster centres reduces to a relatively straightforward problem of locating local maxima in the numerical values produced by the second iterative process.
An advantage of the invention is that it is much easier for machines to locate local maxima in the numerical values than to locate cluster centres in the clustering mechanism itself, where the cluster centres are the locations at which the density of the weight vectors is highest.
In a preferred embodiment of the invention, the second data structure comprises a coefficient for each of the weight vectors in the lattice structure. Each iteration in the first iterative process comprises selecting a winner weight vector for each of the data points on the basis of a distance measure between the input data point and the weight vector. Each iteration in the second iterative process comprises calculating a next value of each coefficient on the basis of the current value of the coefficient; and a combination of: 1) the current coefficient of the winner weight vector, 2) a second neighbourhood function that approaches zero as the distance on the lattice structure between the weight vector and the winner weight vector increases, and 3) an adjustment factor for adjusting convergence speed between iterations.
The combination referred to above can be a simple multiplication. If the second neighbourhood function is selected appropriately, such that the second data structure has distinct borders, the step of determining the weight vectors can be accomplished simply by selecting local maxima in the second data structure.
A preferred version of the second neighbourhood function is not monotonic, but gives negative values at some distances. Also, the second neighbourhood function is preferably made more pronounced over time as the number of prior iterations increases.
Preferably, the first data structure is or comprises a self-organizing map and the input data points represent real-world quantities.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following the invention will be described in greater detail by means of preferred embodiments with reference to the attached drawings, in which
Figure 1 illustrates a self-organizing map (SOM) with six clusters of input data points; Figure 2 shows a typical form of a neighbourhood function used in an SOM algorithm;
Figure 3 shows a 15 by 15 lattice structure resulting from uniformly distributed input data;
Figure 4 shows a probabilistic map for visualizing the cluster centres in an SOM with six cluster centres;
Figure 5 shows a preferred form of the second neighbourhood function used in the second iterative process according to a preferred embodiment of the invention;
Figure 6 shows a computer pseudocode listing for generating the function shown in Figure 5.
Figure 7 shows a coefficient map that visualizes the data structure used for locating the cluster centres in the SOM; and
Figure 8 is a flow chart illustrating a method according to the invention wherein the method comprises two iterative processes run in tandem; Figures 9 and 10 show an SOM map and a coefficient map, respectively, with five clusters;
Figures 11 and 12 show an SOM and a coefficient map, respectively, for an exceptional distribution of input data;
Figure 13 shows an example of a neighbourhood function used in an automatic cluster-labelling algorithm; Figure 14 shows the result of the automatic cluster-labelling algorithm; and
Figure 15 shows how the automatic cluster-labelling algorithm can be integrated with a cluster-determination algorithm according to the invention.
DETAILED DESCRIPTION OF THE INVENTION
A practical example of the invention is disclosed in the context of self-organizing maps. An SOM is a learning algorithm or mechanism from the area of Artificial Neural Networks (ANNs) that finds wide application in the areas of vector quantization, unsupervised clustering and supervised classification. Reasons for its widespread use include its robustness, even for data sets of very different and even unknown origin, as well as the simplicity of its implementation. The SOM uses a set of weight vectors to form a quantized, topology-preserving mapping of the input space. The distribution of the weights reflects the probability distribution of the input. The SOM representation is used in clustering applications generally by clustering the weight vectors after training, using for example the K-means algorithm. However, the problem of the original K-means algorithm still remains, that is, determining the value of K for the number of centres. In the following, a method based on the SOM algorithm is described which can be used to automatically determine cluster centres in an unsupervised manner. In other words, the number of clusters does not have to be predefined, and groups of adjacent SOM weight vectors represent the cluster centres. Unlike the K-means algorithm, where each cluster is represented by one centre, in the inventive algorithm the cluster is represented by a set of centres which correspond to weight vectors in the SOM. The algorithm requires few additional computational resources and makes direct use of the information generated during the learning of the SOM. In this sense the algorithm can be considered a hybrid of the K-means algorithm and a method based on a probabilistic mixture model. Each cluster is represented by a set of centres, which correspond to a set of weights in the SOM. The SOM weights, in turn, form an approximation of the probability distribution of the input.
From the ANN point of view this may be interesting, as the algorithm uses lateral inhibition between the weight vectors to generate the clusters and a form of Hebbian learning. It is clear that the performance of the clustering depends heavily on the topology-preserving and converging ability of the SOM.
The SOM algorithm
Figure 1 shows a self-organizing map 10. More particularly, Figure 1 shows an online version of the SOM. There also exists a "batch SOM", but this requires storing all the input points and going through all of them several times, which is why the online version is preferred here. Reference numerals 11 generally denote input data points that in most SOM applications represent real-world events or quantities. The SOM algorithm creates an SOM or lattice structure 12 by means of an iterative process that can be summarized as follows. Consider a time sequence of inputs ω(t), t = 1, ..., with ω ∈ R^m and a probability distribution p_ω. The SOM itself consists of a total of N weight vectors X_k ∈ R^m distributed on an n-dimensional lattice. Thus there are two associated dimensions, a dimension m of the input data and the weight vectors, and a dimension n of the lattice. The reason for having a lattice is to be able to define neighbourhood relations between adjacent weights. For example, if each weight k has an associated position vector i_k on the lattice, then a distance d_L(i_l, i_k) between weights k and l on the lattice can be defined. The initial values of the weight vectors can be randomly chosen, as the convergence of the algorithm is independent of the initial conditions. At each iteration, a distance d(ω(t), X_k) between an input ω(t) at time t and each of the weight vectors X_k is calculated. A winner weight v(t) at time t is then defined as:
v(t) = arg min_{1 ≤ k ≤ N} d(ω(t), X_k)   [1]
For real-valued data, a possible distance measure d(·,·) could be the Euclidean distance. When the winner has been found, each weight vector is updated as:
X_k(t+1) = X_k(t) + α(t) h(k, v(t)) (ω(t) − X_k(t))   [2]
where α(t) ∈ [0,1] is a decreasing function of time which determines the learning rate of the SOM and h is a neighbourhood function which is a function of the distance on the lattice between the winner weight and the updated weight, that is: h(k, v(t)) = h(d_L(i_{v(t)}, i_k)).
Figure 2 shows a typical form 21 of a neighbourhood function h used in the SOM algorithm. The neighbourhood function h is positive, having a maximum value of 1 for d_L = 0 and decreasing monotonically towards 0 as d_L increases. In practical situations h is often Gaussian, i.e. of the form:
h(d_L) = e^(−a·d_L²)   [3]
for some appropriately chosen scaling factor a, which may be a function of time. The SOM map is built iteratively. In other words, the steps in equations [1] and [2] are repeated for each t. For uniformly distributed input data and a two-dimensional lattice with 15 x 15 weights, the SOM algorithm gives the result shown in Figure 3.
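For concreteness, the following Python/NumPy sketch performs one on-line SOM iteration according to equations [1]-[3]; the Euclidean distance is used both in the input space and on the lattice, and the array shapes and argument names are assumptions made for this illustration.

    import numpy as np

    def som_step(weights, positions, x, alpha, a):
        # weights: (N, m) array of weight vectors X_k; positions: (N, n) lattice coordinates i_k
        # x: (m,) input vector omega(t); alpha: learning rate; a: scaling factor of equation [3]
        v = np.argmin(np.linalg.norm(weights - x, axis=1))        # equation [1]: winner node
        d_L = np.linalg.norm(positions - positions[v], axis=1)    # lattice distance to the winner
        h = np.exp(-a * d_L ** 2)                                 # equation [3]: Gaussian neighbourhood
        weights += alpha * h[:, None] * (x - weights)             # equation [2]: move towards the input
        return v, weights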
Figure 3 shows a plot of the weight values on a support [0,1] × [0,1] of the input, where adjacent weights on the lattice are connected by a line segment in the plot. None of the line segments intersect, which indicates that the weights are organized. The weight values are spread uniformly over the input space and approximate the uniform distribution of the input data. To arrive at such a result the learning is divided into two phases. In the first phase, there is a large α and a, and the algorithm is in the self-organizing phase where the weights become organized. The second phase is the convergence phase, where the neighbourhood width is reduced towards zero as a increases and the learning rate α approaches zero. The weight vectors converge to the approximation of the probability distribution of the input.
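One simple way to realise the two phases is with smooth parameter schedules such as the sketch below, in which the learning rate α(t) decays towards zero while the scaling factor a of equation [3] grows, narrowing the neighbourhood; the exponential form and all numeric values are assumptions for illustration only.

    def learning_schedules(t, t_max, alpha_start=0.5, alpha_end=0.01, a_start=0.1, a_end=5.0):
        # returns (alpha, a) for iteration t of t_max; the endpoints are illustrative guesses
        frac = t / float(t_max)
        alpha = alpha_start * (alpha_end / alpha_start) ** frac   # learning rate decays towards zero
        a = a_start * (a_end / a_start) ** frac                   # neighbourhood width shrinks over time
        return alpha, a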
Unsupervised Clustering Based on the SOM
After the description of the basic SOM algorithm, some techniques of determining cluster centres will be disclosed. Consider the SOM shown in Figure 1. Figure 1 shows data points generated from six different normal distributions with different means and variances, and plotted on this there are the weight vectors from the SOM when data from this distribution was used as an input. It is seen that the weight vectors are organized and are more concentrated at the centres of the clusters. Hence the weight vectors approximate the probability distribution of the input. Reference numerals 13_1 to 13_6 denote six circles located at the cluster centres. The circles are not normally part of the SOM. While it is easy for humans to determine where the cluster centres are, this task is surprisingly difficult for computers.
First, a probabilistic algorithm for determining the cluster centres will be briefly disclosed. Figure 4 shows a plot of the calculated probability of each weight vector being chosen as a winner during the simulation. The probability for each weight vector is determined by keeping a record of the number of times the weight vector was chosen as a winner and then dividing the number by the total number of iterations in the simulation. The i, j axes indicate the position of each weight on the SOM lattice and the p(i, j) axis shows the probability of each of the weights. From this probability distribution a cluster structure is visible, where local maxima in the surface correspond to weight vectors positioned at input data cluster centres and local minima form boundaries between the local maxima and hence between the clusters. However, the different clusters are not clearly distinct and the surface is not smooth. This roughness of the surface and variations in the peak values of the local maxima may lead to problems when trying to automatically determine the cluster centres using an algorithm which would associate a local maximum of the probability with a cluster centre. Defining a global threshold above which a weight vector would be considered a local maximum and a second global threshold below which a weight vector would be considered a local minimum leads to problems. For example, if in one part of the lattice the probability at a local maximum is below the global threshold, that weight vector would not be considered a local maximum, and hence not a cluster centre, which could lead to an erroneous choice of cluster centres. In practice, as in the example shown, this is likely to occur. An alternative would be to define local thresholds for determining a local maximum and a local minimum. However, this would be a complicated process requiring an involved computation with no guarantee of a correct result.
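The probability map of Figure 4 can be computed as sketched below: a win counter per lattice node divided by the number of iterations; the array shapes and the 15x15 default are assumptions.

    import numpy as np

    def winner_probability_map(win_counts, total_iterations, lattice_shape=(15, 15)):
        # win_counts[i]: how many times node i was selected as the winner during training
        p = win_counts / float(total_iterations)      # probability p(i, j) of each weight winning
        return p.reshape(lattice_shape)               # arranged on the 2-D lattice for inspection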
A clustering algorithm according to a preferred embodiment of the invention is based on this observation. In effect the algorithm provides a means to smooth the probability surface just described. The use of a neighbourhood function means that the smoothing operation is done locally, emphasising the local maximum as well as the local minimum. The positive elements of the neighbourhood function emphasise the local maximum and the negative components emphasise the local minimum. The result is that all the local maxima consistently reach high values and the local minima consistently reach low values. This allows for the use of a global threshold to identify the maximum and minimum and thus facilitates the use of a computer in the process. Hence, instead of directly using the probability of a weight being the winner, a related measure is used in the proposed algorithm, which is described as follows.
For each weight i, define a scalar coefficient C_i. This coefficient is bounded to the interval [0, 1], and its initial value before training may be quite small. The SOM algorithm is carried out as described earlier. At each iteration the winner weight v(t) is determined as in equation [1] and each SOM weight i is updated according to equation [2]. At the same time each coefficient C_i is updated as:
C_i(t+1) = C_i(t) + C_{v(t)}(t) h_m(d_L) δ   [4]
where h_m is the second neighbourhood function, d_L and v(t) are the same terms that were used in the SOM algorithm, that is, d_L is the distance on the lattice between node i and the winner node v(t), and δ is a small step value for adjusting convergence speed; δ is somewhat analogous to the α in the SOM algorithm.
Following the update, C_i(t+1) is then forced within the interval [0, 1]. For example, if C_i(t+1) > 1, it can be set to 1, and if C_i(t+1) < 0, it can be set to 0.01. Since the update of C_i(t) depends on the value of C_{v(t)}(t), learning is clearly Hebbian.
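A direct Python/NumPy sketch of equation [4] together with the clamping just described is given below; applying the update to all coefficients at once as a vector operation is an implementation choice, while the clamp values 1 and 0.01 are taken from the text above.

    import numpy as np

    def update_coefficients(C, d_L, v, h_m, delta):
        # C: (N,) coefficients C_i; d_L: (N,) lattice distances to the winner node v
        # h_m: callable second neighbourhood function; delta: small convergence-speed step
        C = C + C[v] * h_m(d_L) * delta      # equation [4] for every node i at once
        C[C > 1.0] = 1.0                     # force C_i(t+1) back into [0, 1]
        C[C < 0.0] = 0.01
        return C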
Figure 5 shows a preferred form 51 of the second neighbourhood function h_m. Actually, Figure 5 shows a time-dependent version of the second neighbourhood function h_m where the function becomes more pronounced as the number of prior iterations increases. In other words, with increasing time, the function h_m achieves negative values sooner (as the distance d_L increases), and the negative values are much more negative than during the earlier iterations. As can be seen, the preferred form 51 of the second neighbourhood function h_m somewhat resembles the first neighbourhood function h used in the SOM algorithm. Like the first neighbourhood function h, the second neighbourhood function h_m starts at 1 when the distance is zero. Also, h and h_m both approach zero as the distance d_L increases. However, for some distances, the second neighbourhood function h_m is preferably negative. For instance, in a 10 by 10 lattice, h_m may be negative for distances over 3. The negative value of the second neighbourhood function h_m can be seen as a form of lateral inhibition between the weights. Lateral inhibition is a mathematical model that tries to approximate real biological phenomena. Similar to the h function used in the SOM, weights adjacent to the centre of activity have their coefficients and hence their activity increased, while the activity of weights further away from the centre of activity is inhibited.
This lateral inhibition is rarely if ever used in practical applications, however. In the SOM, the interaction between the weight vectors is defined by the neighbourhood function h defined by equation [3], which is strictly positive. If h were allowed to be negative at any point, divergence of the weight vectors could result, instead of convergence. In the clustering method proposed here this lateral inhibition is used to determine the cluster centres. Intuitively it can be seen that if weight i is quite often the winner then C_i will increase along with the coefficients of its neighbours. Similarly, when the winner is i, for its closest neighbours j at a small distance from i on the lattice, such that h_m is positive, the C_j will also increase. At the same time, for weights j at a large distance from i on the lattice, where h_m is negative, the C_j will decrease. Similarly, if a weight i is not often the winner, its C_i will not increase very much and will be decreased by other winners located at a distance on the lattice. Given the example in Figure 1, it is clear that weights in the inter-cluster regions will have a lower probability of being winners, whereas weights close to the cluster centres will have a higher probability of being winners. Hence it would be expected that the coefficients C_i would be higher for weights positioned in or near the cluster centres and small for positions between the clusters. This would then provide boundaries between the cluster centres. Of course this depends on the fact that the weight vectors X_i reach an organized configuration.
Figure 6 shows an example of a computer pseudocode listing for generating the function shown in Figure 5.
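The listing of Figure 6 is not reproduced in this text, so the following is only one plausible construction of h_m with the properties described for Figure 5: equal to 1 at zero distance, decaying towards zero, dipping below zero at larger distances, and with the lateral inhibition made more pronounced as the iteration count grows. A difference of Gaussians with a time-dependent inhibition gain is used here; every constant is an assumption.

    import numpy as np

    def make_h_m(t, t_max, a_excite=0.5, a_inhibit=0.05, inhibit_max=0.6):
        # return h_m for iteration t of t_max; the numeric constants are illustrative only
        g = inhibit_max * t / float(t_max)               # inhibition gain grows with time
        def h_m(d_L):
            d_L = np.asarray(d_L, dtype=float)
            excite = np.exp(-a_excite * d_L ** 2)        # narrow positive centre, like h in equation [3]
            inhibit = g * np.exp(-a_inhibit * d_L ** 2)  # wider negative surround (lateral inhibition)
            return (excite - inhibit) / (1.0 - g)        # normalised so that h_m(0) = 1
        return h_m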
Figure 7 shows a plot 70 of the coefficient values C(i, j) for the weights i, j in the SOM of Figure 1 with δ = 0.01 after 20000 iterations. The second neighbourhood function h_m varied with time and initially did not have a high level of lateral inhibition. Towards the end of the simulation, the level of lateral inhibition was increased. The surface shown in Figure 7 has six distinct and separated elevated regions. As a result of forcing the coefficients between 0 and 1, each elevated region corresponds to a set of adjacent weights on the lattice whose coefficients C_i have saturated at or near 1 and whose weights X_i are found at the cluster centre. These regions are surrounded by regions where the coefficients C_i have been driven to small values. Hence it is possible to determine which weights represent the same cluster. To determine which cluster an input vector belongs to, simply find the closest centre and assign the input to the cluster to which the centre belongs. Thus unlike the K-means algorithm, where one weight represents the cluster, in the algorithm proposed here a group of weights represents the cluster. The means of classification is the same. Because the coefficients C_i are saturated near 0 or 1, it is a simple task to determine the cluster centres using a global threshold, the value of which could be set for example at 0.5.
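The following sketch shows how a computer can read the cluster centres off the coefficient array: apply the global threshold (0.5 in the example above) and group the surviving lattice nodes into connected regions, each region being one cluster represented by the weight vectors of its nodes. The 4-connected flood fill is an assumed implementation detail.

    import numpy as np

    def cluster_regions(C, lattice_shape, threshold=0.5):
        # C: (N,) coefficients; returns a list of clusters, each a list of flat lattice-node indices
        above = C.reshape(lattice_shape) >= threshold
        seen = np.zeros(lattice_shape, dtype=bool)
        clusters = []
        for start in zip(*np.nonzero(above)):
            if seen[start]:
                continue
            stack, region = [start], []
            seen[start] = True
            while stack:                                   # flood fill one connected elevated region
                i, j = stack.pop()
                region.append(i * lattice_shape[1] + j)    # back to the flat node index
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < lattice_shape[0] and 0 <= nj < lattice_shape[1]
                            and above[ni, nj] and not seen[ni, nj]):
                        seen[ni, nj] = True
                        stack.append((ni, nj))
            clusters.append(region)
        return clusters

An input vector is then classified by finding its closest weight vector and looking up which of these regions that weight belongs to, exactly as described above.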
It should be noted that the plot 70 is for visualization purposes only and is not required by computers. Instead, reference number 71 points to an array of current coefficients C_i(t) and reference number 72 to an array of next coefficients C_i(t+1). It is the array 72 of updated coefficients that a computer uses to determine the cluster centres and their locations. An arrow 806 denotes updating of the coefficients that takes place in step 806 of Figure 8 that will be described next. Figure 8 is a flow chart illustrating a method according to a preferred embodiment of the invention wherein the method comprises two iterative processes 81 and 82 run in tandem. The odd-numbered steps on the left-hand side of Figure 8 relate to the known SOM algorithm. The even-numbered steps on the right-hand side of Figure 8 relate to the inventive algorithm for maintaining and updating the second data structure that is used to determine the cluster centres automatically. In step 801 the SOM algorithm is initialized. The initialization comprises selecting initial values for α, a and h, and randomly initializing the weight vectors X_i. Since Figure 8 shows an online algorithm, the values of the inputs ω(t) are not known at this stage. Step 802 is a corresponding initialization step for the second data structure. Steps 803, 805, 807 and 809 form the conventional SOM iteration. In step 803, input ω(t) is presented to the SOM at iteration t. In step 805, the winner weight v(t) is selected according to equation [1]. In step 807, the weight vectors are updated according to equation [2]. In an optional step 807', the variable α that determines the learning speed and/or the scaling factor a of the first neighbourhood function h are updated. In step 809, the iterative loop is repeated until some predetermined stopping criteria are met. For instance, the loop may be set to run a predetermined number of times, or the loop may be interrupted when each succeeding iteration fails to produce a change that exceeds a given threshold. Steps 806 and 808 relate to the second iterative process 82 for maintaining and updating the second data structure that is used to determine the cluster centres. In step 806, the coefficients C_i are updated on the basis of the winner weight v(t) according to equation [4]. In an optional step 808, the parameters of the second neighbourhood function h_m are updated (see Figure 5). According to a preferred feature of the invention, steps 806 and 808 of the second iterative process 82 are interleaved with the steps of the first iterative process 81. In this way, step 806 utilizes intermediate calculation results of step 805. Similarly, step 808, in which the second neighbourhood function h_m is updated, may utilize data from step 807' that updates the variable α and the first neighbourhood function. Figures 9 and 10 show an SOM map and a coefficient map, respectively. In this example the number of clusters in the input data distribution was reduced by one and the same SOM algorithm was used. Figure 9 shows both the input data and the final configuration of the weight vectors. Figure 10 shows the resulting C_i values. The five cluster centres are once again quite clear. It is remarkable that the same algorithm, without any adjustments, was capable of finding the new number of clusters and the locations of those clusters.
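The interleaved loop of Figure 8 can be sketched as below, reusing the helper functions sketched earlier (som_step, update_coefficients, make_h_m, learning_schedules); drawing random samples from a stored data array, the fixed iteration count and the initial coefficient value are simplifications made for this example, since a genuinely on-line system would consume the inputs ω(t) as they arrive.

    import numpy as np

    def train(data, lattice_shape=(15, 15), t_max=20000, delta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        N = lattice_shape[0] * lattice_shape[1]
        positions = np.indices(lattice_shape).reshape(len(lattice_shape), N).T.astype(float)
        weights = rng.random((N, data.shape[1]))     # step 801: random initial weight vectors X_i
        C = np.full(N, 0.1)                          # step 802: small initial coefficients C_i
        for t in range(t_max):                       # step 809: fixed number of iterations
            x = data[rng.integers(len(data))]        # step 803: present an input
            alpha, a = learning_schedules(t, t_max)  # step 807': update alpha and a
            v, weights = som_step(weights, positions, x, alpha, a)       # steps 805 and 807
            h_m = make_h_m(t, t_max)                 # step 808: update the h_m parameters
            d_L = np.linalg.norm(positions - positions[v], axis=1)
            C = update_coefficients(C, d_L, v, h_m, delta)               # step 806: equation [4]
        return weights, C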
The way the invention works is as follows. In the beginning there is a predefined lattice, which in this case is two-dimensional. Each point of the lattice is given a label, e.g. (2,3) or (0,15). This lattice remains fixed and the labelling of the lattice points does not change. In the above examples, the lattice structure is a 15x15 lattice. It is from the lattice that the distances d_L used in all neighbourhood functions are determined. For instance, the distance between lattice points (1,3) and (7,8) could be 6, depending on the distance measure used.
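As a small illustration, the fixed lattice and the distance d_L could be represented as follows; the Chebyshev (maximum) distance is chosen here purely as one possible measure.

from itertools import product

# A fixed 15x15 lattice: every node keeps its label (i, j) permanently.
lattice = list(product(range(15), range(15)))

def lattice_distance(p, q):
    """One possible choice of d_L: the Chebyshev (maximum) distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(lattice_distance((1, 3), (7, 8)))   # -> 6, as in the example above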
Each lattice point is associated with a weight vector. The dimension of the weight vector is always the same as the dimension of the input data vector; in the examples here the input data has two dimensions. It is the weight vectors that change, depending on the input data point and the distance between two points on the lattice, the first point being the lattice point associated with the winner and the second point the lattice point associated with the weight vector to be updated. This distance is not used to update the weight vector directly, but to determine the value of the first neighbourhood function in the update of the weight vector. Figures 1 and 7 show a two-dimensional plot of the input data points and the weight vectors from the SOM. The plot is in the input space and by chance all the input points were bounded by [-8, 8]. The data points are simply the points shown, and the weight vectors are plotted at the crossing points of the lines; in other words, a line is drawn between weight vectors whose associated lattice nodes are the closest nodes on the lattice. The fact that the plot appears as a lattice means that the weights are organized, that is, the weight vectors associated with adjacent lattice nodes appear in the input data space as adjacent to each other.
The relationship between the coefficients and the SOM lattice is the same as the relationship between the weight vectors and the SOM lattice, except that the dimension of the coefficients is always 1. The relationship is somewhat similar to, though not the same as, a probability measure, where the probability would be that the weight vector associated with the lattice node with which the coefficient is associated will be chosen as the winner for any given data input. Another interpretation is that the coefficients represent, in a sense, an exaggerated version of the probability distribution of the input data.
In conclusion, we might say that the lattice is a fixed structure. There is one weight vector associated with each lattice node; the weight vector lies in the input data space. Similarly, there is one coefficient associated with each lattice node. It is a scalar value and gives an indication of the probability, though not the true probability, that the weight vector associated with the same lattice node will be chosen as the winner for a given input data point.
The dimensionality of the input data and the lattice are not necessarily the same. The input data may have any number of dimensions, such as 5, 10 or 15, in which case the dimension of the weight vectors would also be 5, 10 or 15, but the lattice could still be two-dimensional. We could also choose a different lattice to begin with (i.e. change the SOM structure) and make it four-dimensional, for example. In this case, if we still chose to have 15 lattice nodes along each axis of the lattice, we would have 15x15x15x15 lattice nodes and, associated with each lattice node, a 5-, 10- or 15-dimensional weight vector. The examples above use a two-dimensional lattice and a two-dimensional input space merely because they are easier to draw and visualize. In practical implementations the input data can be expected to have more dimensions, while the lattice structure could remain two-dimensional. The number of lattice nodes along each dimension of the lattice is variable, depending on the amount of computational resources available. A small illustrative sketch of this decoupling is given below.
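Purely to illustrate that the lattice dimension and the input dimension are independent, the following sketch builds a lattice of arbitrary dimension and attaches to each node a weight vector of arbitrary input dimension; the names and the random initialization are assumptions of the example.

import numpy as np
from itertools import product

def make_som(lattice_dim=2, nodes_per_axis=15, input_dim=10):
    """Lattice coordinates and one weight vector per node; the lattice
    dimension and the weight (input) dimension are chosen independently."""
    grid = np.array(list(product(range(nodes_per_axis), repeat=lattice_dim)))
    weights = np.random.uniform(-1.0, 1.0, (len(grid), input_dim))
    return grid, weights

grid, weights = make_som(lattice_dim=4, input_dim=10)
print(len(grid))          # 15**4 = 50625 lattice nodes
print(weights.shape[1])   # each carrying a 10-dimensional weight vector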
Figure 11 shows a mixed clustering example consisting of two very different clusters, namely a first cluster 112 described by a normal distribution and a second cluster 114 described by a uniform distribution along a parabola. The same SOM was used as in the previous two examples. Figure 12 shows a map 120 of coefficients C_{i,j} for this example. There are two distinct areas: a connected region 122 of high values around three edges of the lattice, which corresponds to the parabolic distribution cluster 114 in Figure 11, and a disc-shaped region 124 of connected values, which corresponds to the normal distribution cluster 112. This example is quite interesting because of the complexity of the parabolic distribution. In a mixture-model clustering algorithm, without any prior information it would be difficult to generalize this distribution to a normal distribution. Similarly, a K-means approximation would have difficulty in resolving these two clusters, as at some points the distance between two points in different clusters is smaller than the distance between two points in the same cluster. In experiments on this example, the best K-means result came up with four clusters. This example indicates that the clustering algorithm according to the invention may be very general.
Automatic labelling of cluster centres
A further preferred embodiment of the invention relates to automatic and unsupervised labelling of the clusters. The same notation is used here as above, and only notation pertinent to this embodiment will be explained. Consider a set of labels B = {1, 2, ..., K}, which will be used to label the clusters. In practice, K should be greater than or equal to the expected number of clusters. In the case of no prior knowledge, it may be suitable to let K = N, the total number of weights in the SOM, as this imposes a limit on the maximum number of clusters which can be identified.
For each weight i in the SOM, define a vector of coefficients Θ_i as:

Θ_i = (θ_{i,1}, θ_{i,2}, ..., θ_{i,K})   [5]

Each coefficient θ_{i,k} ∈ [0, 1] represents a weighting between the SOM node i and the label k. The weight i belongs to cluster l if:

l = arg max θ_{i,k},  1 ≤ k ≤ K   [6]
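A minimal sketch of how the label vectors Θ_i and the assignment rule of equation [6] could be represented, assuming K = N as suggested above for a 15x15 SOM; the matrix name theta and the random initialization are illustrative only.

import numpy as np

N = K = 225                                  # e.g. K = N for a 15x15 SOM
theta = np.random.uniform(0.0, 1.0, (N, K))  # randomly initialized Θ_i

def cluster_label(theta, i):
    """Equation [6]: weight i takes the label with the largest θ_{i,k}."""
    return int(np.argmax(theta[i]))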
The updating algorithm used on these coefficients to achieve automatic labelling proceeds as follows. At time t, SOM weight v(t) is chosen as the winner. The winner and its neighbours are updated as in the normal SOM algorithm. Also the coefficient C_v of the winner is updated, as well as the coefficients C_i of its neighbours. The updating of the coefficients C_i and the interpretation of the results form the basis of the main invention, namely the automatic and unsupervised clustering. In the automatic and unsupervised labelling of the clusters, at the same time t the Θ_i are updated as follows. Define l_v(t) as the label of the cluster to which the winner weight v(t) is assigned; thus, from equation [6],

l_v(t) = arg max θ_{v(t),k}(t),  1 ≤ k ≤ K   [7]
For all the weights j, j = 1, ..., N, the components θ_{j,l_v(t)} are then updated as follows:

θ_{j,l_v(t)}(t+1) = θ_{j,l_v(t)}(t) + C_{v(t)}(t+1) h_B(d_L) δ   [8]

where once again h_B is a neighbourhood function and preferably has the form shown in Figure 13. The idea is that the neighbours of the winning weight will have their θ_{j,l_v(t)} increased, to ensure that they will be classified to the same cluster as the winner v(t), whereas weights further away from the winner weight will have their θ_{j,l_v(t)} decreased, to ensure that they will be classified to a different cluster than the winner. For the weights where the neighbourhood function h_B(d_L) > 0, it is also advantageous to decrease the other coefficients θ_{j,k}, k = 1, ..., K, k ≠ l_v(t), in Θ_j as follows:

θ_{j,k}(t+1) = θ_{j,k}(t) − C_{v(t)}(t+1) h_B(d_L) δ   [9]
This reinforces the labelling of the winner and its neighbours to the cluster label l_v(t).
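By way of example, equations [7] to [9] could be implemented roughly as in the following sketch. The Mexican-hat-like shape used for h_B (cf. Figure 13), the clipping of the coefficients to [0, 1] and all names are assumptions of this illustration; as noted in the following paragraph, C_v at iteration t or t+1 may be used interchangeably, and here whichever value is held in the array C is used.

import numpy as np

def update_labels(theta, C, grid, v, delta=0.01):
    """One labelling step for winner v: equations [7], [8] and [9]."""
    lv = int(np.argmax(theta[v]))                            # eq. [7]
    dL = np.linalg.norm(grid - grid[v], axis=1)
    hB = np.exp(-dL ** 2 / 2.0) - 0.4 * np.exp(-dL ** 2 / 18.0)   # cf. Figure 13
    step = C[v] * hB * delta
    theta[:, lv] += step                                     # eq. [8]
    near = hB > 0                                            # winner's neighbours
    others = np.arange(theta.shape[1]) != lv
    theta[np.ix_(near, others)] -= step[near][:, None]       # eq. [9]
    np.clip(theta, 0.0, 1.0, out=theta)                      # keep θ in [0, 1]
    return theta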
Note that equation [4] uses C_v at iteration t whereas equations [8] and [9] use C_v at iteration t+1. In practice the coefficient C_v changes so little between iterations that either value can be used, depending on which value is more conveniently available.

Figure 14 shows an example of the result of combining the SOM, the automatic clustering and the automatic labelling of the clusters in the case of an input distribution consisting of five normal distributions. In this case the set of labels was given by B = {1, 2, 3, 4, 5, 6, 7, 8}. The values of the θ_{i,k} were randomly initialized. Figure 14 shows an SOM with five distinct regions, one region around each cluster. Each node in the five areas 1 to 5 is assigned a label such that nodes in area 1 have a label of 2, nodes in area 2 have a label of 4, and so on. The inter-region areas have a label of 0: these weights had a maximum component of Θ_i lower than a threshold value of 0.2 and are therefore not assigned as centres to any cluster. It is clear that the cluster centres have been properly labelled with the labels {2, 4, 5, 6, 8}.

Figure 15 shows how the automatic cluster-labelling algorithm can be integrated with the cluster-determination algorithm according to the invention. Figure 15 is a modification of Figure 8, with step 806 followed by a step 806' that relates to the automatic cluster-labelling algorithm. In step 806', the cluster labels l_v(t) are determined according to equation [7], after which the θ_{j,l_v(t)} components are updated according to equation [8] and the θ_{j,k} components are updated according to equation [9]. By placing step 806' inside the second iterative process 82, maximal synergy benefits are obtained; in other words, the computational overhead is kept to a minimum because step 806' makes use of the winner selection and coefficient determination already performed for the SOM construction and the automatic cluster determination.
Summary
The technique according to the invention allows automatic determination of cluster centres with a minimal amount of information on the data. No explicit initial estimate of the number of clusters is required, and given the nature of the convergence of the SOM, there is no need to know the type of distribution of the clusters either. In this respect the algorithm is very general. However, although explicit initial estimates of the number of clusters are not required, care should be taken to ensure that the SOM lattice contains a number of nodes larger than the expected number of clusters, and to choose a non-monotonous neighbourhood function that is negative for large distances and provides a level of lateral inhibition, so that the coefficients for the cluster regions stand out more clearly.
The preferred embodiment of the invention, in which the second iterative process is interleaved with the conventional iterative process, requires little computational overhead. Thus this embodiment of the invention is especially suitable for on-line applications where human supervision is not available. Initial simulations on artificial data show that the inventive technique is simple, apparently robust, and more easily generalized than most current clustering algorithms. The technique according to the invention can be considered something of a hybrid between K-means and probabilistic-model-based clustering.
It is readily apparent to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims

1. A computer-implemented method for determining cluster centres (13_1 - 13_6) in a first data structure (10), wherein the first data structure comprises a lattice structure (12) of weight vectors that create an approximate representation of a plurality of input data points (11); the method comprising: performing a first iterative process (81) for iteratively updating the weight vectors such that they move toward cluster centres (13_1 - 13_6); performing a second iterative process (82) for iteratively updating a second data structure (70 - 72) utilizing results of the iterative updating of the first data structure; and determining, on the basis of the second data structure (70 - 72), the weight vectors that correspond to cluster centres of the input data points.
2. A method according to claim 1, wherein each iteration in the first iterative process (81) comprises: selecting a winner weight vector (v) for each data point on the basis of the distance between the data point and the weight vectors; and calculating a next value for each weight vector on the basis of the current value of the weight vector and a first neighbourhood function (21, h) of the distance on the lattice structure between the weight vector and the winner weight vector; and the second data structure (70 - 72) comprises a first coefficient (C_i) for each of the weight vectors in the lattice structure, and each iteration in the second iterative process (82) comprises calculating (806) a next value of each first coefficient (C_i) on the basis of: the current value of the first coefficient; and a combination of: a first coefficient of the winner weight vector (v), a second neighbourhood function (51, h_m) of the distance on the lattice structure between the weight vector and the winner weight vector, and an adjustment factor (δ) for adjusting convergence speed between iterations.
3. A method according to claim 1 or 2, wherein the step of determining the weight vectors that correspond to cluster centres comprises selecting local maxima in the second data structure (70 - 72).
4. A method according to claim 2 or 3, wherein the combination is or comprises multiplication.
5. A method according to any one of claims 2 to 4, wherein the second neighbourhood function (51, h_m) is not monotonous.
6. A method according to any one of claims 2 to 5, wherein the first coefficients are limited to the range [0,1] and the second neighbourhood function (51, h_m) gives negative or positive values, respectively, for some distances.
7. A method according to any one of claims 2 to 6, wherein the second neighbourhood function (51, h_m) depends on the number of prior iterations.
8. A method according to any one of the preceding claims, wherein the input data points (11) represent real-world quantities.
9. A method according to any one of claims 2 to 8, wherein the first data structure (10) is or comprises a self-organizing map.
10. A method according to claim 9, further comprising: estimating an upper limit K for the number of clusters in the self-organizing map; defining a coefficient vector Θ_i = (θ_{i,1}, θ_{i,2}, ..., θ_{i,K}) for each weight vector in the self-organizing map, the coefficient vector comprising K second coefficients θ_{i,k}, each of which represents a weighting between the weight vector and a label k; and assigning cluster label l to weight vector i if:

l = arg max θ_{i,k},  1 ≤ k ≤ K.
11. A method according to claim 10, wherein each iteration in the second iterative process (82) comprises calculating (806') a next value of each second coefficient on the basis of the current value of the second coefficient and a combination of: a coefficient of the winner weight vector, a third neighbourhood function (131, h_B) of distance, and an adjustment factor (δ) for adjusting convergence speed between iterations.
12. A computer-readable program product comprising a computer program code, wherein executing the computer program code in a computer causes the computer to carry out the steps of the method according to claim 1.