|Publication number||US6728690 B1|
|Application number||US 09/448,408|
|Publication date||Apr 27, 2004|
|Filing date||Nov 23, 1999|
|Priority date||Nov 23, 1999|
|Inventors||Christopher A. Meek, John C. Platt|
|Original Assignee||Microsoft Corporation|
This invention relates generally to the field of classifiers and in particular to a trainer for classifiers employing maximum margin back-propagation with probability outputs.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawing hereto: Copyright©1999, Microsoft Corporation, All Rights Reserved.
The problem of determining how to classify things is well known. Humans have an innate ability to classify things quickly; many intelligence tests are based on this ability. For a human, it is quite simple to determine whether a coin is a penny, a nickel or another denomination. We can tell whether an email is junk mail or spam, which we do not want to read, or an email from a friend or customer that we need to read quickly. However, many humans do not have a comprehensive knowledge of how to determine whether the word “affect” or “effect” should be used when writing a sentence. There have been many attempts to program computers to classify things, and the mathematics involved can become quite complex. It was difficult early on to get a computer to classify coins, but neural networks, trained with many sets of inputs, could do a reasonably good job. The training for such simple classifiers was fairly straightforward due to the simplicity of the classifications.
However, classifying things as junk email or spam is much more difficult, and involves the use of complex mathematical functions. The rest of this background section describes some prior attempts at training classifiers. They involve the selection and modification of parameters for functions that compute indications of whether or not a particular input is likely in each of multiple categories. They involve the use of training sets, which are fed into classifiers, whose raw output is processed to modify the parameters. After several times through the training set, the parameters converge on what is thought to be a good set of parameters that, when used in the classifier on input not in the training set, produces good raw output representative of the proper categories.
This raw output, however, might not be directly related to the percentage likelihood that a particular category is the right one. This has led to the use of a further function, called a probability transducer, that takes the raw output and produces a probability for each category for the given input. Many problems remain with such systems. It is difficult to produce one that provides good performance and does not overfit to the training data. Overfitting to the training data can produce wrong results unless a very large amount of training data is provided; this, however, severely impacts performance.
A standard back-propagation architecture is shown in prior art FIG. 1. Functions used in this description are indicated by numbers in parentheses for ease of reference. Back-propagation is well-known in the art. It is described fully in section 4.8 of the book “Neural Networks for Pattern Recognition” by Christopher M. Bishop, published by Oxford University Press in 1995. The purpose of a back-propagation architecture is to train a classifier 105 (and associated probability transducer 125) to produce probabilities of category membership given an input. The classifier 105 is trained by example. A training set comprising training inputs 100 is provided to the classifier 105. The classifier 105 has parameters 110 which are like adjustable coefficients for functions used to calculate raw outputs 120 indicative of class membership. The parameters 110 are adjusted so that the final probability outputs 135 are close to a set of desired outputs 145 for the training set. Given enough training inputs 100 and associated desired outputs 145, and a sensible choice of classifier 105, the overall system will generalize correctly to inputs that are not identical to those found in training inputs 100. In other words, it will produce a high probability that the input belongs in the correct category.
The functional form of classifier 105 can take many forms. The output 120 of the classifier 105 can be a single value, if the classifier is designed to produce a single category probability. Alternatively, the output 120 of the classifier 105 can be a plurality of values, if the classifier is designed to produce probabilities for multiple categories. For purposes of discussion, the output 120 of a classifier 105 is denoted by a vector yi, regardless of whether output 120 is a single output or a plurality of outputs. Similarly, one example of training input is denoted by a vector xj, regardless of the true dimensionality of the training input.
Back-propagation is applicable to any classifier 105 that has a differentiable transfer function that is parameterized by one or more parameters 110. One typical functional form of classifier 105 is linear:
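Equation (1) is not reproduced in this text; from the surrounding description, the standard linear form it refers to is (a reconstruction, not the patent's original typesetting):

```latex
y_i = \sum_j w_{ij}\, x_j + b_i \tag{1}
```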
where wij and bi are the parameters 110, j is the number of classes, and there are no internal state variables 115. Another typical functional form of classifier 105 is a linear superposition of basis functions:
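Equation (2) likewise did not survive extraction; the standard form of a linear superposition of parameterized basis functions, consistent with the text that follows, is (a reconstruction):

```latex
y_i = \sum_k w_{ik}\, \Phi_k(\mathbf{x}; \theta_k) + b_i \tag{2}
```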
where the wik, bi, and θk are the parameters 110, and the Φk are the internal state variables 115. The parameterized functions Φk can take on many different forms, such as Gaussians, logistic functions, or multi-layer perceptrons, as is well-known in the art (see chapters 4 and 5 of the Bishop book). Other possible forms (e.g., convolutional neural networks) are known in the art.
The desired output 120 of a classifier 105 is a probability of a category 135. These probability outputs are denoted as pi, again regardless of whether there is one probability value or a plurality of probability values. Typically, a probability transducer 125 is applied to the output 120 of the classifier 105 to convert it into a probability. When there is only one output or when the outputs are not mutually exclusive categories, a sigmoidal (or logit) function is typically used:
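Equation (3), the sigmoidal (logit) transducer, is missing from this text; its standard form, consistent with Ai and Bi as described below, is (a reconstruction):

```latex
p_i = \frac{1}{1 + \exp(A_i\, y_i + B_i)} \tag{3}
```

With Ai fixed to −1 and Bi fixed to 0, this reduces to the familiar logistic function 1/(1 + e^{−y_i}).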
where Ai is typically fixed to −1, while Bi is typically fixed to 0. The parameters Ai and Bi are the fixed parameters 130. When a logit probability transducer is coupled to a linear classifier, the overall system performs a classical logistic regression. When the classifier 105 is deciding between a plurality of mutually exclusive categories, a softmax or winner-take-all function is often used:
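Equation (4), the softmax function, is also missing; its standard form with the parameters Ai and Bi described below is (a reconstruction):

```latex
p_i = \frac{\exp(A_i\, y_i + B_i)}{\sum_j \exp(A_j\, y_j + B_j)} \tag{4}
```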
where Ai is typically fixed to +1 and Bi is typically fixed to 0. Again, the parameters Ai and Bi are the fixed parameters 130.
The back-propagation architecture will attempt to adjust the parameters of the classifier 110 to cause the probability outputs 135 to be close to the desired outputs 145. The values of the desired outputs 145 are denoted as ti. The values of the desired outputs 145 are typically (although not always) either 0 or 1, depending on whether a specific training input 100 belongs to the category or not.
The closeness of the probability output 135 to the desired output 145 is measured by an error metric 140. The error metric computes an error function E(n)(pi(n), ti(n)) for one particular training input, whose corresponding parameters and values are denoted with a superscript (n); this function is at a minimum when pi(n) matches ti(n). The output of the error metric is actually an error gradient 150:
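Equation (5), the error gradient, is not reproduced here; from the surrounding description it is (a reconstruction):

```latex
g_i^{(n)} = \frac{\partial E^{(n)}}{\partial p_i^{(n)}} \tag{5}
```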
There are many error metrics used in the prior art. For example, the squared error can be used:
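Equation (6) did not survive extraction; the standard squared-error form is (a reconstruction, possibly differing from the original by a constant factor of ½):

```latex
E^{(n)} = \sum_i \left(p_i^{(n)} - t_i^{(n)}\right)^2 \tag{6}
```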
or the cross-entropy score for use with probability transducer (3):
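Equation (7), the cross-entropy score for use with the sigmoidal transducer (3), is missing; its standard form is (a reconstruction):

```latex
E^{(n)} = -\sum_i \left[ t_i^{(n)} \log p_i^{(n)} + \left(1 - t_i^{(n)}\right) \log\left(1 - p_i^{(n)}\right) \right] \tag{7}
```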
or the entropy score for use with probability transducer (4):
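Equation (8), the entropy score for use with the softmax transducer (4), is missing; its standard form is (a reconstruction):

```latex
E^{(n)} = -\sum_i t_i^{(n)} \log p_i^{(n)} \tag{8}
```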
The benefit of using error metric (7) combined with probability transducer (3), or error metric (8) combined with probability transducer (4), is that the output of the probability transducer is trained to be a true posterior probability estimate of category given input data.
Previously, researchers such as Sontag and Sussmann in “Back propagation separates where Perceptrons do” in the journal Neural Networks, volume 4, pages 243-249, (1991), have suggested using a margin error metric as error metric 140. The gradient (5) of a margin error metric is shown in FIG. 2. A margin error metric is defined as having, for positive examples in the category, a negative gradient below an output level M+ and an exactly zero gradient above M+, and, for negative examples out of the category, a positive gradient above an output level M− and an exactly zero gradient below M−. The threshold M+ must be strictly greater than the threshold M−. A margin error metric was originally proposed by Sontag and Sussmann to ensure that data sets that are linearly separable would be cleanly separated by a back-propagation algorithm. The disadvantage of such a margin error metric is that the outputs are no longer probabilities.
The gradient computation 155 computes the partial derivative of the error E with respect to all of the parameters of the classifier by using the chain rule. The gradient computation 155 uses the gradient of the error 150, the probability outputs 135, the parameters 110, and any internal state 115 of the classifier 105. The computation of the gradient with back-propagation is well-known in the art: see section 4.8 of Bishop's book. The output of the gradient computation is denoted as Gi (n), where i is the index of the ith free parameter 110 of the classifier 105 and n is the index into the training set.
Once the Gi (n) are computed, the parameters 110 should be updated to reduce the error. The update rule 160 will update the parameters 110. There are any number of updating rules that cause the error to be reduced. One style of update rule 160 updates the parameters 110 every time a training input 100 is presented to classifier 105. Such update rules are called “on-line.” Another style of update rule computes the true gradient of the error over the entire training set with respect to free parameters 110, by summing the Gi (n) over the index n of all training inputs 100, then updating the parameters after the sum is computed. Such update rules are called “batch”.
One example of an update rule is the stochastic gradient descent rule, where the parameters 110 are adjusted by a small step in the direction that will improve the overall error. We will denote the ith parameter 110 of classifier 105 as γi:
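Equation (9), the stochastic gradient descent rule, is missing from this text; its standard form is (a reconstruction):

```latex
\gamma_i \leftarrow \gamma_i - \eta\, G_i^{(n)} \tag{9}
```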
The step size η can be held constant, or decrease with time, as is well-known in the art. The convergence of stochastic gradient descent can be improved via momentum, as is well-known in the art:
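Equation (10), stochastic gradient descent with momentum, is missing; its standard form, with μ the momentum coefficient (a symbol assumed here, since the original equation is lost), is:

```latex
\Delta\gamma_i \leftarrow \mu\, \Delta\gamma_i - \eta\, G_i^{(n)}, \qquad \gamma_i \leftarrow \gamma_i + \Delta\gamma_i \tag{10}
```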
Either stochastic gradient descent or stochastic gradient descent with momentum can be used in either on-line or batch mode. In batch mode, the term Gi (n) is replaced with a sum of Gi (n) over all n. Other numerical algorithms can be used in batch mode, such as conjugate gradient, pseudo-Newton, or Levenberg-Marquardt. Chapter 7 of Bishop's book describes many possible numerical algorithms to minimize the error.
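As a concrete illustration, update rules (9) and (10) can be sketched in a few lines of Python (a minimal sketch; the function and variable names are illustrative, not from the patent):

```python
def sgd_momentum(grad, gamma0, eta=0.1, mu=0.9, steps=300):
    """Gradient descent with momentum, per update rules (9)/(10).

    grad(gamma) returns the gradient G at the current parameters gamma.
    The momentum buffer delta accumulates a decayed sum of past steps."""
    gamma = list(gamma0)
    delta = [0.0] * len(gamma)
    for _ in range(steps):
        g = grad(gamma)
        for i in range(len(gamma)):
            delta[i] = mu * delta[i] - eta * g[i]  # rule (10): update velocity
            gamma[i] += delta[i]                   # take the step
    return gamma

# Minimize E(gamma) = sum_i (gamma_i - target_i)^2; the gradient is 2*(gamma - target).
target = [1.0, -2.0]
result = sgd_momentum(lambda g: [2.0 * (gi - ti) for gi, ti in zip(g, target)],
                      [0.0, 0.0])
```

Run in batch mode, `grad` would instead return the sum of the per-example gradients Gi(n) over all n.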
Simply minimizing the error on the training set can often lead to paradoxically poor results. The classifier 105 will work very well on inputs that are in the training set, but will work poorly with inputs that are away from the training set. This is known as overfitting: by minimizing only the error on the training set, the error off the training set is only indirectly minimized, and could be high.
There are many algorithms to avoid overfitting. One very simple algorithm is to penalize parameters 110 that have large value (see Bishop, section 9.2). This algorithm is known as weight decay. Weight decay 165 is shown in FIG. 1: it uses the current values of the parameters to modify the update rule.
The reasoning behind weight decay is that the correct classifier should be the most likely classifier given the data. According to Bayes' rule, the posterior probability of classifier given data is proportional to the probability of the data given the classifier (the likelihood) multiplied by the prior probability of the classifier. The error on the training set is commonly interpreted as a log likelihood: to account for the prior, a log prior must be added to the error. If the prior probability over parameters 110 is a Gaussian with mean zero, then the log prior is a quadratic penalty on the parameters 110. The derivative of a quadratic is linear, so a Gaussian prior over the parameters 110 adds a linear term to the update rule. With weight decay, the update rule (9) becomes:
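Equation (11), update rule (9) with weight decay, is missing from this text; with λ the weight-decay strength (a symbol assumed here), the standard form is:

```latex
\gamma_i \leftarrow \gamma_i - \eta\left(G_i^{(n)} + \lambda\, \gamma_i\right) \tag{11}
```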
while update rule (10) becomes:
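Equation (12), update rule (10) with weight decay, is likewise missing; its standard form is (a reconstruction, with μ the momentum coefficient and λ the weight-decay strength):

```latex
\Delta\gamma_i \leftarrow \mu\, \Delta\gamma_i - \eta\left(G_i^{(n)} + \lambda\, \gamma_i\right), \qquad \gamma_i \leftarrow \gamma_i + \Delta\gamma_i \tag{12}
```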
People of ordinary skill in the art will understand how to modify an update rule (such as conjugate gradient) to reflect a weight decay term.
The architecture in FIG. 1 is limited, because the generalization error for data not in the training set is not directly minimized. Section 10.1 of the book “Statistical Learning Theory” by Vladimir Vapnik (published by Wiley Inter-science in 1998) proposes using a support vector machine: a linear classifier (1) with only one output value trained via the following quadratic programming problem:
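Equation (13), the quadratic programming problem, did not survive extraction; the standard soft-margin support vector machine formulation consistent with the surrounding text is (a reconstruction, with slack variables ξ assumed from the standard formulation):

```latex
\min_{w,\, b,\, \xi}\ \frac{1}{2}\sum_j w_j^2 + C \sum_n \xi^{(n)}
\quad \text{subject to} \quad
T^{(n)} y^{(n)} \ge 1 - \xi^{(n)}, \qquad \xi^{(n)} \ge 0 \tag{13}
```

where y(n) = Σj wj xj(n) + b is the raw output of the linear classifier on the nth training example.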
where wj is the weight of the input xj contributing to the single output, y(n) is the output value of the classifier when the input is the nth training example, and T(n) is the desired output of the classifier for the nth training example: T(n) is +1 for positive examples in the category and −1 for negative examples out of the category. Vapnik proposes this quadratic programming problem in order to directly minimize a bound on the error of the classifier on inputs not in the training set. When the quadratic optimization problem (13) is solved, the weights wj and the threshold b are the optimal hyperplane that splits inputs in the category from inputs out of the category.
The architecture to solve the constrained optimization problem (13) is shown in prior art FIG. 3, where the training inputs 100 are fed to a classifier 105 to produce raw outputs 120. The input 100 and outputs 120 and desired outputs 145 are all provided to a quadratic programming solver 200, which updates the parameters 110.
Section 10.5 of Vapnik's book also describes extensions of the quadratic programming problem (13) to non-linear support vector machines. Those extensions are limited to those non-linear classifiers of form (2) whose basis functions Φ obey a set of conditions, called Mercer's conditions. Such a non-linear extension has the same architecture as the linear case, as shown in FIG. 3: only the matrix used in the quadratic programming 200 is different from that of the linear case.
Sections 7.12 and 11.11 of Vapnik's book describe a method for mapping the output of a classifier trained via constrained optimization problem (13) into probabilities, using an architecture also shown in FIG. 3. After the parameters 110 of the classifier 105 are completely determined, the raw outputs 120 of the final classifier are fed to a probability transducer 125. The probability transducer suggested by Vapnik is a linear blend of cosines. The blending coefficients are the parameters of the probability transducer 210. These parameters are determined from statistical measurements of the raw outputs 120 and the desired outputs 145 involving Parzen window kernels (see Vapnik, sections 7.11 and 7.12).
Vapnik's support vector machine learning architecture produces classifiers that perform very well on inputs not in the training set. However, only a limited array of non-linear classifiers can be trained. Also, the number of non-linear basis functions Φ used in a support vector machine tends to be much larger than the number of basis functions used in a back-propagation network, so that support vector machines are often slower at run time. Also, the probability transducer proposed by Vapnik may not yield a probability function that is monotonic in the raw output, which is a desirable feature. Nor are the probabilities constrained to lie between 0 and 1, or to sum to 1 across all classes.
Prior art FIG. 4 shows another example of prior art disclosed by Denker and LeCun in the paper “Transforming Neural-Net Output Levels to Probability Distributions”, which appears in the Advances in Neural Information Processing Systems conference proceedings, volume 3, pages 853-859. Here, a classifier 105 is trained using the standard architecture shown in FIG. 1. Then, a new probability transducer is trained from a calibration set of input data, separate from the training set. Denker and LeCun then suggest that the probability transducer 125 has its parameters 210 determined by statistical measurements 225. In this case, the statistical measurements need to know the gradient and the Hessian of the error metric 140 with respect to the parameters 110. These measurements then determine the parameters of the new probability transducer, which is either a Parzen window non-parametric estimator or a softmax function (4) with parameters Ai and Bi determined from the statistical measurements 225.
The limitation of the Denker and LeCun method using Parzen windows is that Parzen windows take a lot of memory, may be slow at run time, and may not yield a probability output that is monotonic in the raw output. The problem with the softmax estimator with parameters derived from the statistical measurements 225 is that the assumptions that derive these parameters (that the outputs for every category are spherical Gaussians with equal variance) may not be true, hence the parameters may not be optimal.
There is a need for a learning architecture that is as applicable as back-propagation to many classifier functional forms, while still providing the mathematical assurance of support vector machines of good performance on inputs not in the training set. There is a further need for a system that yields true posterior probabilities that are monotonic in the raw output of the classifier and that does not require a non-parametric probability transducer. There is a need for a training system that minimizes a bound on expected testing error, which means that it avoids overfitting.
A training system for a classifier utilizes both a back-propagation system, which iteratively modifies parameters of functions that provide raw output indications of desired categories, wherein the parameters are modified based on weight decay, and a probability determining system with further parameters that are determined during iterative training. The raw outputs of the back-propagation system provide either a high or low indication for each category for a given input, while the parameters of the probability determining system, combined with the raw outputs, are used to determine the percentage likelihood of each classification for a given input.
In one aspect of the invention, the back-propagation system uses a margin error metric combined with weight decay. The probability determining system uses a sigmoid to calibrate the raw outputs to probability percentages for each category.
The training system may be used with multiple applications in order to determine the probability that some item fits in one of potentially several categories, or in just one category. One practical application involves identifying email as spam or junk email versus email people want to receive; another, in grammar-checking programs, is determining whether the word “affect” or “effect” is correct based on the preceding 100 words.
A method of training such a system involves gathering a training set of inputs and desired corresponding outputs. Classifier parameters are then initialized and an error margin is calculated with respect to the classifier parameters. A weight decay is then used to adjust the parameters. After a selected number of times through the training set, the parameters are deemed in final form, and an optimization routine is used to derive a set of probability transducer parameters for use in calculating the probable classification for each input.
The training system minimizes a bound on expected testing error, which means that it avoids overfitting, so that training points which are outside normal ranges do not adversely affect the parameters. It provides a compact function with a minimal number of parameters for a given accuracy. In addition, the system provides simultaneous training of all the parameters for all the classes, as opposed to prior systems which required individual training of parameters. The result is a simple, fast-to-evaluate probabilistic output that is monotonic.
FIG. 1 is a prior art block diagram of a standard back-propagation architecture.
FIG. 2 is a prior art graph of a gradient of a margin error metric.
FIG. 3 is a prior art block diagram of a classification system using a quadratic programming solver to update parameters.
FIG. 4 is a prior art block diagram of a classification system similar to that shown in FIG. 1 with the addition of a probability transducer calibrated separate from a training set.
FIG. 5 is a block diagram of a computer system implementing the present invention.
FIG. 6 is a block diagram of a training system in accordance with the present invention.
FIG. 7 is a block diagram of an alternative embodiment of the training system of the present invention.
FIG. 8 is a flowchart illustrating a method of generating sets of parameters for the training system in accordance with the present invention.
In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the spirit or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The detailed description is divided into multiple sections. In the first section, a hardware and operating environment in conjunction with which embodiments of the invention may be practiced is described. In the second section, the architecture of the classification training system is described along with alternative descriptions. A third section describes a method of using the training system, followed by a conclusion which describes some potential benefits and describes further alternative embodiments.
FIG. 5 provides a brief, general description of a suitable computing environment in which the invention may be implemented. The invention will hereinafter be described in the general context of computer-executable program modules containing instructions executed by a personal computer (PC). Program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that the invention may be practiced with other computer-system configurations, including hand-held devices, multiprocessor systems, microprocessor-based programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like which have multimedia capabilities. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
FIG. 5 shows a general-purpose computing device in the form of a conventional personal computer 20, which includes processing unit 21, system memory 22, and system bus 23 that couples the system memory and other system components to processing unit 21. System bus 23 may be any of several types, including a memory bus or memory controller, a peripheral bus, and a local bus, and may use any of a variety of bus structures. System memory 22 includes read-only memory (ROM) 24 and random-access memory (RAM) 25. A basic input/output system (BIOS) 26, stored in ROM 24, contains the basic routines that transfer information between components of personal computer 20. BIOS 26 also contains start-up routines for the system. Personal computer 20 further includes hard disk drive 27 for reading from and writing to a hard disk (not shown), magnetic disk drive 28 for reading from and writing to a removable magnetic disk 29, and optical disk drive 30 for reading from and writing to a removable optical disk 31 such as a CD-ROM or other optical medium. Hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to system bus 23 by a hard-disk drive interface 32, a magnetic-disk drive interface 33, and an optical-drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for personal computer 20. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 29 and a removable optical disk 31, those skilled in the art will appreciate that other types of computer-readable media which can store data accessible by a computer may also be used in the exemplary operating environment. Such media may include magnetic cassettes, flash-memory cards, digital versatile disks, Bernoulli cartridges, RAMs, ROMs, and the like.
Program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 and RAM 25. Program modules may include operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into personal computer 20 through input devices such as a keyboard 40 and a pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial-port interface 46 coupled to system bus 23; but they may be connected through other interfaces not shown in FIG. 5, such as a parallel port, a game port, or a universal serial bus (USB). A monitor 47 or other display device also connects to system bus 23 via an interface such as a video adapter 48. A video camera or other video source may be coupled to video adapter 48 for providing video images for video conferencing and other applications, which may be processed and further transmitted by personal computer 20. In further embodiments, a separate video card may be provided for accepting signals from multiple devices, including satellite broadcast encoded images. In addition to the monitor, personal computers typically include other peripheral output devices (not shown) such as speakers and printers.
Personal computer 20 may operate in a networked environment using logical connections to one or more remote computers such as remote computer 49. Remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device, or other common network node. It typically includes many or all of the components described above in connection with personal computer 20; however, only a storage device 50 is illustrated in FIG. 5. The logical connections depicted in FIG. 5 include local-area network (LAN) 51 and a wide-area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When placed in a LAN networking environment, PC 20 connects to local network 51 through a network interface or adapter 53. When used in a WAN networking environment such as the Internet, PC 20 typically includes modem 54 or other means for establishing communications over network 52. Modem 54 may be internal or external to PC 20, and connects to system bus 23 via serial-port interface 46. In a networked environment, program modules depicted as residing within PC 20, or portions thereof, such as those comprising Microsoft® Word, may be stored in remote storage device 50. Of course, the network connections shown are illustrative, and other means of establishing a communications link between the computers may be substituted.
Software may be designed using many different methods, including object oriented programming methods. C++ and Java are two examples of common object oriented computer programming languages that provide functionality associated with object oriented programming. Object oriented programming methods provide a means to encapsulate data members (variables) and member functions (methods) that operate on that data into a single entity called a class. Object oriented programming methods also provide a means to create new classes based on existing classes.
An object is an instance of a class. The data members of an object are attributes that are stored inside the computer memory, and the methods are executable computer code that act upon this data, along with potentially providing other services. The notion of an object is exploited in the present invention in that certain aspects of the invention are implemented as objects in one embodiment.
An interface is a group of related functions that are organized into a named unit. Each interface may be uniquely identified by some identifier. Interfaces have no instantiation, that is, an interface is a definition only without the executable code needed to implement the methods which are specified by the interface. An object may support an interface by providing executable code for the methods specified by the interface. The executable code supplied by the object must comply with the definitions specified by the interface. The object may also provide additional methods. Those skilled in the art will recognize that interfaces are not limited to use in or by an object oriented programming environment.
In FIG. 6, a block diagram of a training system uses the mathematics of support vector machines in equation (13) applied to back-propagation. The quadratic programming problem (13) is equivalent to the non-differentiable error function for one output:
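Equation (14), the non-differentiable error function, did not survive extraction; from the surrounding text, with t(n) ∈ {+1, −1} the desired output for the nth example, the form equivalent to (13) is (a reconstruction):

```latex
E = \frac{1}{2C} \sum_j w_j^2 + \sum_n \left(1 - t^{(n)} y^{(n)}\right)_+ \tag{14}
```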
where the ( . . . )+ function denotes a function which returns its argument if positive, and zero otherwise, and t(n) is either +1 or −1. If the classifier function is of the form of equation (1), then the first term in (14) can be identified as a weight decay term with strength 1/C. If the classifier function is of the form of equation (2), then the first term in (14) can be identified as a weight decay with strength 1/C only for the wik parameters: the weights of the last layer. The derivative of the second term is exactly the margin error function shown in FIG. 2, with M+=1 and M−=−1, and the magnitude of the non-zero part of the gradient equal to 1. Those of ordinary skill in the art have avoided the minimization problem (14) due to its non-differentiability. However, this invention teaches that stochastic gradient descent (9) or (10) still works with non-differentiable functions and hence has advantages for solving classification problems.
The error function (14) can be generalized to more than one output:
This generalized error function is very useful: via back-propagation, it causes the last layer to become an optimal hyperplane, while forcing the basis functions Φ to form a mapping that separates the data maximally. Hence, the basis functions are adapted to the problem, unlike standard support vector machines, where the basis functions are fixed ahead of time to be centered on the training examples.
The architecture of the invention is shown in FIG. 6. Training inputs 600 are provided to a classifier 605 with a differentiable transfer function. In one embodiment, the classifier 605 computes equation (2), with non-linear basis functions:
The raw outputs 620 of the classifier are then fed to a margin error metric 695, which in general can compute any margin error metric. In the preferred embodiment, margin error metric 695 computes the error gradient 650:
which is shown in FIG. 2, where H is the Heaviside step function. Note that the error gradient 650 is pre-scaled by η in order to save computation. This error gradient is then fed to a gradient computation 655, which produces a gradient which gets fed to an update rule 660 to update the parameters 610. In addition, in order to ensure that the last layer of classifier 605 computes an optimal hyperplane, weight decay 665 is required for at least the wik parameters. Other weight decay terms can optionally be added.
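A minimal sketch of such an error-gradient computation, assuming targets in {+1, −1}, margins M+ = 1 and M− = −1, and pre-scaling by η as described (the function name and sign conventions are illustrative, not the patent's):

```python
def margin_error_gradient(y_raw, target, eta=0.1):
    """Margin-error gradient, pre-scaled by the learning rate eta.

    target is +1 or -1. The gradient is non-zero only when the example
    lies inside its margin (target * y_raw < 1), matching the one-sided
    hinge penalty; the non-zero part has constant magnitude eta.
    """
    inside_margin = 1.0 if target * y_raw < 1.0 else 0.0  # Heaviside step
    return -eta * target * inside_margin
```

A well-classified point (outside its margin) contributes nothing, while a margin violation contributes a fixed-size gradient, which is what makes the error function non-differentiable at the margin yet still usable with stochastic gradient descent.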
In one embodiment, the gradient computation 655, the update rule 660, and the weight decay 665 are combined into the operations (18), (19), and (20), shown below. Stochastic gradient descent with momentum is used. The operation (18) updates the last layer weights and thresholds.
where N is the number of training examples. One step of back-propagation is then applied to use the internal state 615 to help compute the gradients for the first layer:
The updates for the first layer are then computed:
Those of ordinary skill in the art can generalize (18), (19), and (20) to single-layer classifiers of form (1) (by using only operation (18) and substituting xj (n) for Φj (n)) and multi-layer classifiers with alternative Φ.
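The shape of a last-layer update in the spirit of operation (18) can be sketched as follows, combining stochastic gradient descent, momentum, and weight decay. The exact constants and signs of (18) are not reproduced in the text above, so this is an assumption rather than the patent's formula; the error gradient is taken to be pre-scaled by η, as noted earlier, so only the decay term carries η explicitly:

```python
def update_last_layer(w, velocity, grad, phi, eta=0.1, mu=0.9, C=10.0, N=100):
    """One stochastic step for the last-layer weights w (a sketch).

    grad -- the pre-scaled margin-error gradient for this example
    phi  -- basis-function outputs for this example
    The weight-decay strength 1/C is divided by N so that one pass over
    all N training examples applies roughly one full unit of decay.
    """
    new_w, new_v = [], []
    for wj, vj, pj in zip(w, velocity, phi):
        # momentum smooths the stochastic gradient; weight decay pulls
        # the last layer toward the maximum-margin solution
        vj = mu * vj - grad * pj - (eta / (C * N)) * wj
        new_v.append(vj)
        new_w.append(wj + vj)
    return new_w, new_v
```

With a zero gradient the weights simply shrink slightly (the decay), and a margin violation pushes each weight along its basis-function activation.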
An alternative margin error function 695 can produce an error gradient 650 of the form:
The form (21) is used when making a type I error has a different cost than making a type II error. For example, in detecting junk e-mail, a classifier error that mistakes a real mail message for a junk message is more serious than mistaking a junk message for a real mail message. If a classifier 605 is being built to distinguish junk from real e-mail, and junk e-mail is defined to be the positive category, then η2 should be substantially larger than η1.
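An asymmetric-cost variant in the spirit of form (21) can be sketched with class-dependent learning rates; the names and the assignment of η1 to positive (junk) examples and η2 to negative (real mail) examples are illustrative assumptions:

```python
def asymmetric_margin_gradient(y_raw, target, eta1=0.1, eta2=0.5):
    """Margin-error gradient with per-class learning rates (a sketch).

    With junk mail as the positive category, eta2 >> eta1 penalizes
    errors on real mail (the negative class) much more heavily, since
    misfiling a real message as junk is the more serious mistake.
    """
    if target > 0:  # positive (junk) example, margin at +1
        return -eta1 if y_raw < 1.0 else 0.0
    else:           # negative (real mail) example, margin at -1
        return eta2 if y_raw > -1.0 else 0.0
```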
The use of a margin error function 695 and a weight decay function 665 provides excellent generalization performance; however, the raw outputs 620 are not interpretable as probabilities. For some applications this is acceptable, although it can yield sub-optimal decision boundaries. In one embodiment, a probability transducer 625 is added to the system to produce probability outputs 635. In one embodiment, probability transducer 625 has form (3) when there is only one output or the outputs are not mutually exclusive, and form (4) when the outputs are mutually exclusive.
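As a sketch of what such transducers might look like: the two-parameter sigmoid below is in the style of form (3) and the softmax-style transducer in the style of form (4). The exact parameterizations of equations (3) and (4) are not reproduced in the text above, so the formulas here are assumptions:

```python
import math

def sigmoid_transducer(y_raw, A, B):
    """Two-parameter sigmoid in the style of form (3).

    Assumed parameterization: P = 1 / (1 + exp(A*y + B)); with A < 0
    the probability is monotonically increasing in the raw output.
    """
    return 1.0 / (1.0 + math.exp(A * y_raw + B))

def softmax_transducer(y_raws, A, B):
    """Softmax-style transducer in the style of form (4), for mutually
    exclusive categories (assumed parameterization): each raw output is
    affinely rescaled, exponentiated, and normalized to sum to one."""
    z = [math.exp(a * y + b) for y, a, b in zip(y_raws, A, B)]
    total = sum(z)
    return [zi / total for zi in z]
```

Fitting the two parameters per category is the job of the iterative optimization algorithm 690 described below.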
The parameters 685 of the probability transducer 625 can be determined from the training set. This is preferable to a calibration set, since it requires less data, although a separate calibration set can be used. Instead of making assumptions about the probability densities of the raw output given the category label, the parameters 685 are found that estimate posterior probabilities directly. In other words, the probability transducer 625 is trained like a small neural network (with only two parameters per category in one embodiment). The system uses a probability scoring function 680 that compares the desired outputs 645 to the probability outputs 635. The mismatch between the probability outputs 635 and the desired outputs 645 is fed to an iterative optimization algorithm 690, which produces a set of parameters 685 which minimizes the mismatch between the probability outputs 635 and the desired outputs 645. A sophisticated optimization algorithm can be used, due to the small number of parameters that need to be optimized. If probability transducer 625 has form (3), then finding the parameters for each category is a separable problem and the optimization algorithm 690 can be called once for every category.
The type of probability scoring function 680 depends on the form of the probability transducer 625. If the probability transducer 625 has form (3), then the scoring function 680 would likely have the form:
where the ri (n) are derived below. If the probability transducer 625 has form (4), then the scoring function 680 would have form:
Equations (22) and (23) are very similar to equations (7) and (8). Indeed, one possible method is to set ri (n)=ti (n). The problem with this equality is that it could lead to overfitting of the probability transducer 625. One example of overfitting is when all of the negative examples have negative raw outputs while all of the positive examples have positive raw outputs. In this case, the error function (22) is minimized when probability transducer equation (3) has Ai=−∞ and Bi=0, which yields probabilities that are either 0 or 1.
In order not to overfit the probability transducer, a Bayesian prior must be placed on the iterative optimization 690. In one embodiment, it is assumed that every data point has a non-zero probability of having the opposite label. The probability of this label flip goes to zero as the number of data points with a given label goes to infinity. In other words, the value ri (n) is the posterior probability of the data point being in the class, given a uniform prior over the probability of being in the class. If ti (n)=1 (a positive example in the category), then
where N+ is the number of positive examples in the training set. If ti (n)=0 (a negative example not in the category), then
where N− is the number of negative examples in the training set. Other methods known in the art of regularizing the fit can be used.
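The target values of equations (24) and (25), whose images do not reproduce above, are consistent with the well-known regularized targets t+ = (N+ + 1)/(N+ + 2) for positive examples and t− = 1/(N− + 2) for negative examples. A sketch, presuming these match the patent's elided equations:

```python
def regularized_targets(n_pos, n_neg):
    """Out-of-sample target values for fitting the sigmoid, assuming a
    uniform prior over the in-class probability (presumed to match the
    patent's elided equations (24) and (25)).

    As the count of a label grows, its target approaches the hard 0/1
    label, so the label-flip probability vanishes in the limit.
    """
    t_pos = (n_pos + 1.0) / (n_pos + 2.0)  # target for positive examples
    t_neg = 1.0 / (n_neg + 2.0)            # target for negative examples
    return t_pos, t_neg
```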
Probability scoring functions (22) or (23) provide a maximum likelihood parameter estimate, given that the training labels were drawn independently from a multinomial distribution with probability given by the output of the probability transducer 625. In practice, these functions give a good fit to probabilities near 0 and 1 and a poorer fit to probabilities near 0.5. In an alternative implementation, when producing more accurate probabilities near 0.5 is important, the following scoring functions can be used:
where scoring function (26) is used with probability transducer form (3) and scoring function (27) is used with probability transducer form (4). The scoring functions (26) and (27) are known in the art, but have not been used in the context of the present invention. The value k is typically 0.05.
Iterative optimization algorithm 690 can be selected from among any of a number of unconstrained optimization algorithms, including conjugate gradient, variable metric methods, or Newton methods. In one embodiment, optimization algorithm 690 is a second-derivative trust-region method, as is known in the art. See Practical Optimization by Philip Gill, Walter Murray, and Margaret Wright, pages 113-115 (Academic Press, 1981).
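The fit of the two transducer parameters can be sketched end to end. The sketch below substitutes plain batch gradient descent on the cross-entropy scoring function (22), evaluated against the regularized targets, for the trust-region method the patent prefers; the initialization of B is a common heuristic, not taken from the patent:

```python
import math

def fit_sigmoid(raw_outputs, labels, iters=2000, lr=0.01):
    """Fit (A, B) of P = 1/(1 + exp(A*y + B)) by minimizing the
    cross-entropy score against regularized targets (a sketch; the
    patent uses a second-derivative trust-region method instead).
    labels are 0/1 category indicators."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    t_pos = (n_pos + 1.0) / (n_pos + 2.0)
    t_neg = 1.0 / (n_neg + 2.0)
    targets = [t_pos if t else t_neg for t in labels]
    A = 0.0
    B = math.log((n_neg + 1.0) / (n_pos + 1.0))  # heuristic start
    for _ in range(iters):
        gA = gB = 0.0
        for y, r in zip(raw_outputs, targets):
            p = 1.0 / (1.0 + math.exp(A * y + B))
            # dE/dA and dE/dB of E = -sum(r*ln p + (1-r)*ln(1-p))
            gA += (r - p) * y
            gB += (r - p)
        A -= lr * gA
        B -= lr * gB
    return A, B
```

On separable data, A converges to a negative value, so the fitted probabilities increase monotonically with the raw output, while the regularized targets keep them strictly between 0 and 1.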
An alternative embodiment is shown in FIG. 7, where a standard support vector machine architecture (training inputs 700, classifier 705, raw outputs 720, classifier parameters 710, and quadratic programming 750) is combined with the probability transducer learning architecture of FIG. 6 (probability transducer 725, probability outputs 735, probability scoring function 760, optimization algorithm 770, parameters 755). The alternative embodiment provides the standard support vector machine model with a monotonic probability transducer. The probability transducer 725 can implement equations (3) or (4). The probability scoring function 760 can implement equations (22), (23), (26), or (27).
FIG. 8 shows a flowchart for a computer program used to determine the parameters for the classifier of the present invention. The program can be composed of modules or other program elements, stored in the random access memory of the computer system of FIG. 5 and run or executed on the processor therein. It may also be communicated on a carrier wave or stored on some other form of computer-readable medium for distribution to further computer systems.
At 810, a training set of inputs and desired outputs is gathered using methods well known in the art. The number of inputs and desired outputs sufficient for training depends on the complexity of the classification to be performed, and increases with that complexity. At 820, classifier parameters are initialized in one of many known manners; they may be randomly selected within certain ranges, or may have predetermined values as desired. At 825, a classifier using the current parameters is run against a chosen input. At 830, a margin error metric is computed using the desired output; at 835, a gradient of the error with respect to the current parameters is computed, followed by a computation of weight decay for the parameters at 840. The parameters are then adjusted at 850, followed by a determination at 860 as to whether the learning is done. If not done, control returns to block 825, and the process is repeated to further modify the parameters. This determination can be based on a selected number of runs through the training set. Up to 800 or more runs should produce adequate stability of the parameters; other numbers of runs may be significantly more or less, depending on how clearly the data sets are separable into different categories. A further way of determining whether the learning is done comprises running a classifier with the parameters against an unused portion of the data set after each learning loop until a desired accuracy is obtained. Other methods apparent to those skilled in the art may be used without departing from the invention.
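The outer loop of FIG. 8 can be sketched as follows, with `step_fn` standing in for blocks 825-850 (running the classifier, computing the margin error, the gradient, and the weight decay, and adjusting the parameters); the fixed-pass stopping rule shown is the simpler of the two criteria described above:

```python
def training_loop(inputs, targets, params, step_fn, max_passes=800):
    """Run step_fn once per training example per pass, for a fixed
    number of passes (the text suggests up to 800 or more passes for
    stable parameters; a held-out accuracy check is the alternative
    stopping rule). step_fn(params, x, t) returns updated params."""
    for _ in range(max_passes):
        for x, t in zip(inputs, targets):
            params = step_fn(params, x, t)
    return params
```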
Once the determination that learning is complete at 860 is made, a data set of raw classifier outputs and desired corresponding outputs is collected at 865 for use in running an optimization algorithm to derive the probability transducer parameters at 870. This ends the training session, and results in two sets of parameters, which when combined in respective classifier and probability transducer provide a probability for a given input for each category.
A training system provides parameters for a parameterized classifier that produces raw outputs, together with parameters for a separately trained probability transducer. The classifier is trained either as a support vector machine using quadratic programming or with a margin error metric and weight decay, so its raw output is not directly interpretable as the probability that an input belongs to a given category. The probability transducer is separately trained on this output using the same training set, a subset thereof, or a calibration set. Training the probability transducer parameters is similar to classic neural network training, in that two parameters per category (in one embodiment) are trained by minimizing the mismatch between the generated probabilities and the desired outputs. The parameters are used in a selected sigmoid function to generate the probabilities.
The training system can be used to build classification systems for many different types of classifiers. Examples include determining whether an email is one a user would not normally want to spend time on, referred to as junk email or spam, as well as many other practical applications, including grammar checkers. Further, a set of parameters in and of itself has practical application, in that it can be transmitted via a machine-readable medium, including carrier waves, to a classification system as described above, enabling the classification system to properly categorize input. The system yields true posterior probabilities that are monotonic in the raw output of the classifier and does not require a non-parametric probability transducer.
This application is intended to cover any adaptations or variations of the present invention. It is manifestly intended that this invention be limited only by the claims and equivalents thereof.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5649068 *||May 16, 1996||Jul 15, 1997||Lucent Technologies Inc.||Pattern recognition system using support vectors|
|US6161130 *||Jun 23, 1998||Dec 12, 2000||Microsoft Corporation||Technique which utilizes a probabilistic classifier to detect "junk" e-mail by automatically updating a training and re-training the classifier based on the updated training set|
|US6192360 *||Jun 23, 1998||Feb 20, 2001||Microsoft Corporation||Methods and apparatus for classifying text and for building a text classifier|
|US6327581 *||Apr 6, 1998||Dec 4, 2001||Microsoft Corporation||Methods and apparatus for building a support vector machine classifier|
|1||Bishop, C.M., Neural Networks for Pattern Recognition, Oxford University Press, ISBN 0198538642, 1-456, (1995).|
|2||David D. Lewis and Jason Catlett, "Heterogeneous Uncertainty Sampling for Supervised Learning", Machine Learning: Proceedings of the Eleventh International Conference, Morgan Kaufmann Publishers, San Francisco, CA, pp. 148-156, 1994.|
|3||Denker, J.S., et al. "Transforming Neural-Net Output Levels to Probability Distributions", Advances in Neural Information Processing System 3, 853-859.|
|4||Sontag, E.D., et al., "Back Propagation Separates Where Perceptrons Do", Rutgers Center for Systems and Control, Neural Networks, vol. 4, No. 2-ISSN 0893-6080, 243-249, (1991).|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6954326 *||Sep 26, 2002||Oct 11, 2005||Seagate Technology Llc||Dynamically improving data storage device performance|
|US7219148||Mar 3, 2003||May 15, 2007||Microsoft Corporation||Feedback loop for spam prevention|
|US7249162||Feb 25, 2003||Jul 24, 2007||Microsoft Corporation||Adaptive junk message filtering system|
|US7272853||Jun 4, 2003||Sep 18, 2007||Microsoft Corporation||Origination/destination features and lists for spam prevention|
|US7296018 *||Jan 2, 2004||Nov 13, 2007||International Business Machines Corporation||Resource-light method and apparatus for outlier detection|
|US7395253||Apr 1, 2002||Jul 1, 2008||Wisconsin Alumni Research Foundation||Lagrangian support vector machine|
|US7406450 *||Feb 20, 2006||Jul 29, 2008||Nec Laboratories America, Inc.||Spread kernel support vector machine|
|US7409708||May 28, 2004||Aug 5, 2008||Microsoft Corporation||Advanced URL and IP features|
|US7421417 *||Aug 28, 2003||Sep 2, 2008||Wisconsin Alumni Research Foundation||Input feature and kernel selection for support vector machine classification|
|US7437338||Mar 21, 2006||Oct 14, 2008||Hewlett-Packard Development Company, L.P.||Providing information regarding a trend based on output of a categorizer|
|US7451123 *||Dec 8, 2005||Nov 11, 2008||Microsoft Corporation||Probability estimate for K-nearest neighbor|
|US7464264||Mar 25, 2004||Dec 9, 2008||Microsoft Corporation||Training filters for detecting spasm based on IP addresses and text-related features|
|US7478075 *||Apr 11, 2006||Jan 13, 2009||Sun Microsystems, Inc.||Reducing the size of a training set for classification|
|US7483947||May 2, 2003||Jan 27, 2009||Microsoft Corporation||Message rendering for identification of content features|
|US7519668||Jun 20, 2003||Apr 14, 2009||Microsoft Corporation||Obfuscation of spam filter|
|US7543053||Feb 13, 2004||Jun 2, 2009||Microsoft Corporation||Intelligent quarantining for spam prevention|
|US7558832||May 2, 2007||Jul 7, 2009||Microsoft Corporation||Feedback loop for spam prevention|
|US7561158 *||Jan 11, 2006||Jul 14, 2009||International Business Machines Corporation||Method and apparatus for presenting feature importance in predictive modeling|
|US7593904||Jun 30, 2005||Sep 22, 2009||Hewlett-Packard Development Company, L.P.||Effecting action to address an issue associated with a category based on information that enables ranking of categories|
|US7640313||Jul 17, 2007||Dec 29, 2009||Microsoft Corporation||Adaptive junk message filtering system|
|US7660865||Aug 12, 2004||Feb 9, 2010||Microsoft Corporation||Spam filtering with probabilistic secure hashes|
|US7664819||Jun 29, 2004||Feb 16, 2010||Microsoft Corporation||Incremental anti-spam lookup and update service|
|US7665131||Jan 9, 2007||Feb 16, 2010||Microsoft Corporation||Origination/destination features and lists for spam prevention|
|US7668789||Mar 30, 2006||Feb 23, 2010||Hewlett-Packard Development Company, L.P.||Comparing distributions of cases over groups of categories|
|US7711779||Jun 20, 2003||May 4, 2010||Microsoft Corporation||Prevention of outgoing spam|
|US7748038||Dec 6, 2004||Jun 29, 2010||Ironport Systems, Inc.||Method and apparatus for managing computer virus outbreaks|
|US7797282||Sep 29, 2005||Sep 14, 2010||Hewlett-Packard Development Company, L.P.||System and method for modifying a training set|
|US7836133 *||May 5, 2006||Nov 16, 2010||Ironport Systems, Inc.||Detecting unwanted electronic mail messages based on probabilistic analysis of referenced resources|
|US7854007||May 5, 2006||Dec 14, 2010||Ironport Systems, Inc.||Identifying threats in electronic messages|
|US7873583||Jan 19, 2007||Jan 18, 2011||Microsoft Corporation||Combining resilient classifiers|
|US7904517||Aug 9, 2004||Mar 8, 2011||Microsoft Corporation||Challenge response systems|
|US7930353||Jul 29, 2005||Apr 19, 2011||Microsoft Corporation||Trees of classifiers for detecting email spam|
|US8006157||Sep 28, 2007||Aug 23, 2011||International Business Machines Corporation||Resource-light method and apparatus for outlier detection|
|US8046832||Jun 26, 2002||Oct 25, 2011||Microsoft Corporation||Spam detector with challenges|
|US8065370||Nov 3, 2005||Nov 22, 2011||Microsoft Corporation||Proofs to filter spam|
|US8214438||Mar 1, 2004||Jul 3, 2012||Microsoft Corporation||(More) advanced spam detection features|
|US8224905||Dec 6, 2006||Jul 17, 2012||Microsoft Corporation||Spam filtration utilizing sender activity data|
|US8250159||Jan 23, 2009||Aug 21, 2012||Microsoft Corporation||Message rendering for identification of content features|
|US8260730||Mar 14, 2005||Sep 4, 2012||Hewlett-Packard Development Company, L.P.||Method of, and system for, classification count adjustment|
|US8290946 *||Jun 24, 2008||Oct 16, 2012||Microsoft Corporation||Consistent phrase relevance measures|
|US8321514||Dec 30, 2008||Nov 27, 2012||International Business Machines Corporation||Sharing email|
|US8364617||Jan 19, 2007||Jan 29, 2013||Microsoft Corporation||Resilient classification of data|
|US8533270||Jun 23, 2003||Sep 10, 2013||Microsoft Corporation||Advanced spam detection techniques|
|US8601080||Sep 27, 2012||Dec 3, 2013||International Business Machines Corporation||Sharing email|
|US8719073||Aug 25, 2005||May 6, 2014||Hewlett-Packard Development Company, L.P.||Producing a measure regarding cases associated with an issue after one or more events have occurred|
|US8996515 *||Sep 11, 2012||Mar 31, 2015||Microsoft Corporation||Consistent phrase relevance measures|
|US9000918||Mar 2, 2013||Apr 7, 2015||Kontek Industries, Inc.||Security barriers with automated reconnaissance|
|US9047290||Apr 29, 2005||Jun 2, 2015||Hewlett-Packard Development Company, L.P.||Computing a quantification measure associated with cases in a category|
|US9305079||Aug 1, 2013||Apr 5, 2016||Microsoft Technology Licensing, Llc||Advanced spam detection techniques|
|US9329876 *||May 20, 2009||May 3, 2016||Microsoft Technology Licensing, Llc||Resource aware programming|
|US9792359||Apr 29, 2005||Oct 17, 2017||Entit Software Llc||Providing training information for training a categorizer|
|US20040003283 *||Jun 26, 2002||Jan 1, 2004||Goodman Joshua Theodore||Spam detector with challenges|
|US20040167964 *||Feb 25, 2003||Aug 26, 2004||Rounthwaite Robert L.||Adaptive junk message filtering system|
|US20040177110 *||Mar 3, 2003||Sep 9, 2004||Rounthwaite Robert L.||Feedback loop for spam prevention|
|US20040215977 *||Feb 13, 2004||Oct 28, 2004||Goodman Joshua T.||Intelligent quarantining for spam prevention|
|US20040221062 *||May 2, 2003||Nov 4, 2004||Starbuck Bryan T.||Message rendering for identification of content features|
|US20040231647 *||Mar 5, 2004||Nov 25, 2004||Jorn Mey||Method for the operation of a fuel system for an LPG engine|
|US20040260776 *||Jun 23, 2003||Dec 23, 2004||Starbuck Bryan T.||Advanced spam detection techniques|
|US20040260922 *||Mar 25, 2004||Dec 23, 2004||Goodman Joshua T.||Training filters for IP address and URL learning|
|US20050021649 *||Jun 20, 2003||Jan 27, 2005||Goodman Joshua T.||Prevention of outgoing spam|
|US20050022008 *||Jun 4, 2003||Jan 27, 2005||Goodman Joshua T.||Origination/destination features and lists for spam prevention|
|US20050022031 *||May 28, 2004||Jan 27, 2005||Microsoft Corporation||Advanced URL and IP features|
|US20050049985 *||Aug 28, 2003||Mar 3, 2005||Mangasarian Olvi L.||Input feature and kernel selection for support vector machine classification|
|US20050160340 *||Jan 2, 2004||Jul 21, 2005||Naoki Abe||Resource-light method and apparatus for outlier detection|
|US20050204006 *||Mar 12, 2004||Sep 15, 2005||Purcell Sean E.||Message junk rating interface|
|US20050283837 *||Dec 6, 2004||Dec 22, 2005||Michael Olivier||Method and apparatus for managing computer virus outbreaks|
|US20060015561 *||Jun 29, 2004||Jan 19, 2006||Microsoft Corporation||Incremental anti-spam lookup and update service|
|US20060031338 *||Aug 9, 2004||Feb 9, 2006||Microsoft Corporation||Challenge response systems|
|US20060112042 *||Dec 8, 2005||May 25, 2006||Microsoft Corporation||Probability estimate for K-nearest neighbor|
|US20060248054 *||Apr 29, 2005||Nov 2, 2006||Hewlett-Packard Development Company, L.P.||Providing training information for training a categorizer|
|US20070005538 *||Mar 9, 2006||Jan 4, 2007||Wisconsin Alumni Research Foundation||Lagrangian support vector machine|
|US20070038705 *||Jul 29, 2005||Feb 15, 2007||Microsoft Corporation||Trees of classifiers for detecting email spam|
|US20070078936 *||May 5, 2006||Apr 5, 2007||Daniel Quinlan||Detecting unwanted electronic mail messages based on probabilistic analysis of referenced resources|
|US20070079379 *||May 5, 2006||Apr 5, 2007||Craig Sprosts||Identifying threats in electronic messages|
|US20070094170 *||Feb 20, 2006||Apr 26, 2007||Nec Laboratories America, Inc.||Spread Kernel Support Vector Machine|
|US20070118904 *||Jan 9, 2007||May 24, 2007||Microsoft Corporation||Origination/destination features and lists for spam prevention|
|US20070159481 *||Jan 11, 2006||Jul 12, 2007||Naoki Abe||Method and apparatus for presenting feature importance in predictive modeling|
|US20070208856 *||May 2, 2007||Sep 6, 2007||Microsoft Corporation||Feedback loop for spam prevention|
|US20070220607 *||Dec 7, 2006||Sep 20, 2007||Craig Sprosts||Determining whether to quarantine a message|
|US20070260566 *||Apr 11, 2006||Nov 8, 2007||Urmanov Aleksey M||Reducing the size of a training set for classification|
|US20080010353 *||Jul 17, 2007||Jan 10, 2008||Microsoft Corporation||Adaptive junk message filtering system|
|US20080022177 *||Sep 28, 2007||Jan 24, 2008||Naoki Abe||Resource-Light Method and apparatus for Outlier Detection|
|US20080177680 *||Jan 19, 2007||Jul 24, 2008||Microsoft Corporation||Resilient classification of data|
|US20080177684 *||Jan 19, 2007||Jul 24, 2008||Microsoft Corporation||Combining resilient classifiers|
|US20090319508 *||Jun 24, 2008||Dec 24, 2009||Microsoft Corporation||Consistent phrase relevance measures|
|US20100088380 *||Jan 23, 2009||Apr 8, 2010||Microsoft Corporation||Message rendering for identification of content features|
|US20100169429 *||Dec 30, 2008||Jul 1, 2010||O'sullivan Patrick Joseph||Sharing email|
|US20100299662 *||May 20, 2009||Nov 25, 2010||Microsoft Corporation||Resource aware programming|
|US20120330978 *||Sep 11, 2012||Dec 27, 2012||Microsoft Corporation||Consistent phrase relevance measures|
|CN104798089A *||Nov 5, 2013||Jul 22, 2015||高通股份有限公司||Piecewise linear neuron modeling|
|U.S. Classification||706/25, 706/12|
|International Classification||G06N3/04, G06N3/08, G06K9/62|
|Cooperative Classification||G06K9/6281, G06N3/0472, G06N3/08|
|European Classification||G06K9/62C2M2, G06N3/08, G06N3/04P|
|Feb 1, 2000||AS||Assignment|
Owner name: MICROSOFT CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MEEK, CHRISTOPHER A.;PLATT, JOHN C.;REEL/FRAME:010580/0626;SIGNING DATES FROM 20000119 TO 20000120
|Aug 3, 2004||CC||Certificate of correction|
|Sep 28, 2004||CC||Certificate of correction|
|Mar 22, 2005||CC||Certificate of correction|
|Sep 17, 2007||FPAY||Fee payment|
Year of fee payment: 4
|Sep 14, 2011||FPAY||Fee payment|
Year of fee payment: 8
|Dec 9, 2014||AS||Assignment|
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0001
Effective date: 20141014
|Oct 14, 2015||FPAY||Fee payment|
Year of fee payment: 12