WO1999052267A1 - Automated fraud management in transaction-based networks - Google Patents

Automated fraud management in transaction-based networks

Info

Publication number
WO1999052267A1
Authority
WO
WIPO (PCT)
Prior art keywords
fraud
call
recommendations
computer-implemented method
Prior art date
Application number
PCT/US1999/007441
Other languages
French (fr)
Inventor
Gerald Donald Baulier
Michael H. Cahill
Virginia Kay Ferrara
Diane Lambert
Original Assignee
Lucent Technologies Inc.
Priority date
Filing date
Publication date
Application filed by Lucent Technologies Inc.
Priority to AU34704/99A (AU3470499A)
Priority to EP99916368A (EP1068719A1)
Priority to CA002327680A (CA2327680A1)
Priority to JP2000542905A (JP2002510942A)
Priority to BR9909162-3A (BR9909162A)
Publication of WO1999052267A1

Classifications

    • H: Electricity; H04: Electric communication technique; H04M: Telephonic communication; H04W: Wireless communication networks
    • H04M 15/00: Arrangements for metering, time-control or time indication; metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M 15/47: Fraud detection or prevention means
    • H04M 15/58: Metering, charging or billing based on statistics of usage or network monitoring
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/22: Arrangements for supervision, monitoring or testing
    • H04M 3/2218: Call detail recording
    • H04M 3/36: Statistical metering, e.g. recording occasions when traffic exceeds capacity of trunks
    • H04M 3/38: Graded-service arrangements, i.e. some subscribers prevented from establishing certain connections
    • H04M 3/382: Graded-service arrangements using authorisation codes or passwords
    • H04W 12/00: Security arrangements; authentication; protecting privacy or anonymity
    • H04W 12/12: Detection or prevention of fraud
    • H04W 12/126: Anti-theft arrangements, e.g. protection against subscriber identity module [SIM] cloning
    • H04M 2203/00: Aspects of automatic or semi-automatic exchanges
    • H04M 2203/60: Aspects related to security in telephonic communication systems
    • H04M 2203/6027: Fraud preventions
    • H04M 2215/00: Metering arrangements; time controlling arrangements; time indicating arrangements
    • H04M 2215/01: Details of billing arrangements
    • H04M 2215/0148: Fraud detection or prevention means
    • H04M 2215/0188: Network monitoring; statistics on usage on called/calling number

Definitions

  • Referring again to step 202, if an existing case is found, then the case is retrieved in step 206 and the summary of the case, e.g., the call summary variables from FIG. 5B, is updated with information from the current call.
  • The system then calculates a set of decision variables as shown in step 208. More specifically, decision variables are used in determining whether certain conditions have been met, thereby providing the basis for generating recommendations for responding to suspected fraudulent activity in the network.
  • The table in FIG. 6 shows one exemplary list of decision variables that may be useful for case analysis according to the principles of the invention.
  • As shown, decision variable 440 is described as being any of the call summary variables from FIG. 5B or any manipulation of one or more of the call summary variables, such as by ratios, mathematical operations, and so on.
  • For example, any of call summary variables 410, 420, or 430 may individually constitute a decision variable for determining an appropriate recommendation for responding to fraud.
  • Another example of a suitable decision variable could be the combination of two or more of the call summary variables in some predetermined manner, e.g., a ratio of the number of calls in which call forwarding was applied ("CF Count") to the total number of calls in the set ("Number of Calls").
  • The selection of applicable decision variables may again depend on the type of network, the type of transactions, as well as other factors determined to be applicable.
  • Additional decision variables 450, such as "AccountAge", can also be used to provide further information that may be helpful in analyzing fraudulent activity to determine appropriate recommendations; an illustrative calculation of decision variables from a case summary and subscriber information is sketched at the end of this section.
  • FIG. 7 shows one exemplary embodiment of the steps involved in generating recommendations according to the principles of the invention.
  • A rule is defined as including a "condition" and a list of one or more "measures".
  • A "condition" can be a Boolean expression that supports comparisons among decision variables (defined in FIG. 6) and predetermined values or constants. In one of its simplest forms, the Boolean expression may use standard Boolean operators, such as AND, OR, and NOT, as well as precedence.
  • A single "measure" identifies an action (e.g., block services or block market), parameters associated with the action (e.g., call forwarding for the block services example, or Market 25 for the block market example), as well as a flag indicating whether the measure should be carried out automatically.
  • Rules can be modified by the system user (e.g., the service provider) depending on the fraud management requirements for the network.
  • The system retrieves a list of rules and processes each rule according to a hierarchy that may be simple, such as first to last, or that follows some predefined schema; a sketch of this rule-evaluation loop appears at the end of this section.
  • In step 2092, the condition for that rule (e.g., CFcount/numcallsinset > 0.25) is evaluated using the applicable decision variables from FIG. 6.
  • If the rule's condition is met, a measure associated with the particular rule is then retrieved in step 2093.
  • The retrieved measure is added to the list of desired measures in step 2095.
  • The system then determines whether there are more measures associated with the rule, as shown in step 2096. If so, steps 2093-2095 are repeated for all measures in the rule. If there are no more measures for the rule, the system checks for other applicable rules in step 2097. If there are additional rules, then the process described above in steps 2091-2096 is repeated. Once there are no more applicable rules, the system returns to step 210 in FIG. 4. Referring again to step 2092, if the rule's condition is not met, then the system examines whether there are more rules in step 2097 and, if so, the process starts over again with step 2091. If there are no more rules, then the actions associated with step 210 and subsequent steps in FIG. 4 are processed as described below.
  • As a result of the processing that takes place in the steps illustrated in FIG. 7, a recommended measure is automatically generated to respond to suspected fraud. Examples of some actions found in recommended measures are shown in FIG. 8. For example, a recommended measure may be to block all account activity (where the action is "Block Account"), to block only international dialing (where the action is "Block Dialing" and the associated parameter is "international"), or to block a particular type of service, e.g., call forwarding. It should be noted that the list of recommended actions in FIG. 8 is only meant to be illustrative and not limiting in any way.
  • In this manner, the appropriate recommendation can be automatically generated as a function of call-by-call scoring, application of appropriate rules based on the scoring, selection of appropriate call summary variables and decision variables, and so on.
  • The automatically generated recommendations correspond to the call-by-call scoring process, such that the recommendations are more precisely targeted to the specific type of fraud that is occurring on the account.
  • By contrast, most fraud detection and prevention systems are only able to detect the presence of fraud, and only with varying levels of accuracy. Once fraud is detected, these systems typically refer the case for manual investigation.
  • Prevention measures, if they exist at all, are not tailored to the type of suspected fraud.
  • The fraud management system according to the principles of the invention not only detects fraud but also collects information about the particular characteristics of that fraud. As a result, the recommended response can be tailored to those characteristics; for example, where the suspicious activity involves call forwarding, an appropriate recommended fraud response can be to shut down the call forwarding service on that account instead of shutting down all service on that account. In this way, fraud losses can be minimized or eliminated while maintaining service to the legitimate subscriber.
  • As described previously, a recommendation to disable call forwarding may be carried out automatically using provisioning features within the network.
  • The recommendation or recommendations generated in step 209 are compared, in step 210, to recommendations that were previously given for the case. If the recommendations generated from step 209 are not new, then the case analysis process ends for that particular call. If the recommendations are new, then the case is updated with the new recommendations in step 211. If any of the new recommendations are of the type to be carried out automatically, as determined in step 212, then appropriate implementation actions can be taken accordingly; a sketch of this update-and-provision step appears at the end of this section. For example, recommended actions can be implemented automatically via provisioning function 300 (FIG. 1) in the telecommunications network as previously described.
  • The automatic generation of recommendations according to the principles of the invention is predicated on a programmable rules-based engine (i.e., the rules can be reprogrammed). Additionally, it is important to remember that the process steps described above in the context of FIGS. 1-8 can all be carried out on a call-by-call basis in the network. Consequently, the rules-based engine is an adaptive system that is used to develop a history of cases, decision criteria, and final outcomes on a call-by-call basis in the network. As such, the fraud management system and method according to the principles of the invention provide service providers with a fraud management capability that goes well beyond detection and that can be customized according to user-defined policies, subscriber behaviors, and the like.
  • The present invention can be embodied in the form of methods and apparatuses for practicing those methods.
  • The invention can also be embodied in the form of program code embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
  • When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits.
  • Any switches shown in the drawing may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
  • The functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared.
  • The terms "processor" and "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
  • Any switches shown in the drawing are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementor as more specifically understood from the context.
  • Any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements which performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. Applicants thus regard any means which can provide those functionalities as equivalent to those shown herein.
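The following sketches are editorial illustrations in Python and are not part of the patent text. The first corresponds to the decision-variable calculation referenced above (FIG. 6, step 208): call summary variables are combined with subscriber background information into decision variables such as a call-forwarding ratio. The field names, the summary layout, and the particular variables chosen are assumptions made for illustration.

```python
def decision_variables(case_summary, account_info):
    """Combine call summary variables with subscriber background information
    into decision variables, e.g. the call-forwarding ratio mentioned above.
    The variable names and the summary layout are illustrative."""
    susp = case_summary["suspicious"]
    n = max(susp["NumberOfCalls"], 1)            # avoid division by zero
    return {
        "CaseScore": case_summary["CaseScore"],
        "SuspiciousCalls": susp["NumberOfCalls"],
        "CFRatio": susp["CFCount"] / n,          # "CF Count" / "Number of Calls"
        "HotNumberRatio": susp["HotNumberCount"] / n,
        "AccountAge": account_info["account_age_days"],
        "CreditRating": account_info["credit_rating"],
    }

# Example input shaped like a case summary of the kind described in FIG. 5B.
summary = {"suspicious": {"NumberOfCalls": 8, "CFCount": 3, "HotNumberCount": 2},
           "CaseScore": 0.92}
print(decision_variables(summary, {"account_age_days": 12, "credit_rating": "C"}))
```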
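The second sketch corresponds to the rule-evaluation loop of FIG. 7 (steps 2091 through 2097), in which each rule pairs a condition over decision variables with a list of measures. Representing conditions as Python callables and measures as dictionaries is an implementation assumption; the patent only requires a Boolean condition and a list of measures with an automatic-execution flag.

```python
def evaluate_rules(rules, dv):
    """Sketch of the FIG. 7 loop: take each rule in turn (step 2091), test its
    condition against the decision variables (step 2092), and collect the rule's
    measures (steps 2093-2096) until no applicable rules remain (step 2097)."""
    desired = []
    for rule in rules:                      # list order stands in for the rule hierarchy
        if rule["condition"](dv):           # Boolean expression over decision variables
            desired.extend(rule["measures"])
    return desired

# Example rule corresponding to the condition quoted above (CF ratio above 0.25).
rules = [{
    "name": "call-forwarding abuse",
    "condition": lambda dv: dv["CFRatio"] > 0.25,
    "measures": [{"action": "Block Services", "parameter": "call forwarding",
                  "automatic": True}],
}]
print(evaluate_rules(rules, {"CFRatio": 0.4}))
```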
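The third sketch corresponds to steps 210 through 212 of FIG. 4, in which newly generated recommendations are compared with those previously given for the case, the case is updated, and automatically executable measures are handed to provisioning (block 300). The dictionary representation and the provision callable are stand-ins, not the patent's interfaces.

```python
def update_case(case, new_measures, provision):
    """Sketch of steps 210-212: keep only recommendations that are new for this
    case, record them, and hand automatic ones to provisioning (block 300)."""
    previous = {(m["action"], m["parameter"])
                for m in case.setdefault("recommendations", [])}
    new = [m for m in new_measures if (m["action"], m["parameter"]) not in previous]
    if not new:
        return []                            # nothing new: analysis ends for this call
    case["recommendations"].extend(new)      # step 211: update the case
    for measure in new:
        if measure.get("automatic"):         # step 212: implement automatically
            provision(case["account"], measure)
    return new

# Example: an empty case receives an automatic call-forwarding block.
case = {"account": "555-0100"}
measure = {"action": "Block Services", "parameter": "call forwarding", "automatic": True}
print(update_case(case, [measure], provision=lambda acct, m: print("provision", acct, m)))
```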

Abstract

Fraud losses in a communication network are substantially reduced by automatically generating fraud management recommendations in response to suspected fraud and by deriving the recommendations as a function of selected attributes of the fraudulent activity. More specifically, a programmable rules engine automatically generates recommendations based on call-by-call fraud scoring so that the recommendations correspond directly to the type and amount of suspected fraudulent activity. Using telecommunications fraud as an example, an automated fraud management system receives call detail records that have been previously scored to identify potentially fraudulent calls. Fraud scoring estimates the probability of fraud for each call based on the learned behavior of an individual subscriber as well as that of fraud perpetrators. Scoring also provides an indication of the contribution of various elements of the call detail record to the fraud score for that call. A case analysis is initiated and previously scored call detail records are separated into innocuous and suspicious groups based on fraud scores. Each group is then characterized according to selected variables and scoring for its member calls. These characterizations are combined with subscriber information to generate a set of decision variables. A set of rules is then applied to determine if the current set of decision variables meets definable conditions. When a condition is met, prevention measures associated with that condition are recommended for the account. As one example, recommended prevention measures may be automatically implemented via provisioning functions in the telecommunications network.

Description

AUTOMATED FRAUD MANAGEMENT IN TRANSACTION-BASED NETWORKS
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application Serial No. 60/080,006 filed on April 3, 1998, which is herein incorporated by reference. This application is also related to U.S. Application Serial No. (Baulier 4-2-2-5), concurrently filed herewith, which is incorporated by reference herein.
TECHNICAL FIELD
This invention relates generally to fraud management and, more specifically, to an automated approach for managing fraud in transaction-based networks, such as communication networks and the like.
BACKGROUND OF THE INVENTION
Fraudulent use of communication networks is a problem of staggering proportions. Using telecommunications networks as an example, costs associated with fraud are estimated at billions of dollars a year and growing. Given the tremendous financial liability, the telecommunications industry continues to seek ways for reducing the occurrence of fraud while at the same time minimizing disruption of service to legitimate subscribers.
Although there are many forms of telecommunications fraud, two of the most prevalent types or categories of fraud in today's networks are theft-of-service fraud and subscription fraud. For example, theft-of-service fraud may involve the illegitimate use of calling cards, cellular phones, or telephone lines (e.g., PBX lines), while subscription fraud may occur when a perpetrator who never intends to pay for a service poses as a new customer. This latter type of fraud has been particularly difficult to detect and prevent because of the lack of any legitimate calling activity in the account that could otherwise be used as a basis for differentiating the fraudulent activity. In either case, losses attributable to these types of fraud are a significant problem.
Many companies boast of superior fraud management in their product offerings; however, the fact remains that a comprehensive fraud management system does not exist which addresses the operational and economic concerns of service providers and customers alike. For example, a common disadvantage of most systems is that detection of fraud occurs after a substantial amount of fraudulent activity has already occurred on an account. Moreover, many fraud prevention measures implemented in today's systems are quite disruptive to the legitimate customer. As a result, customer "churn" may result as customers change service providers in search of a more secure system.
In general, the shortcomings of prior systems are readily apparent in terms of the amount of time that is required to detect and respond to fraud. For example, fraud detection based on customer feedback from monthly bills is not an acceptable approach to either service providers or customers. Automated fraud detection systems based on "thresholding" techniques are also not particularly helpful in managing fraud on a real-time or near real-time basis. For example, thresholding typically involves aggregating traffic over time (e.g., days, weeks, months), establishing profiles for subscribers (e.g., calling patterns), and applying thresholds to identify fraud. These systems are not viewed as being particularly effective because legitimate users can generate usage that exceeds the thresholds, and the amount of fraud that can occur prior to detection and prevention is high (see, e.g., U.S. Patent No. 5,706,338, "Real-Time Communications Fraud Monitoring System" and U.S. Patent No. 5,627,886, "System and Method for Detecting Fraudulent Network Usage Patterns Using Real-Time Network Monitoring").
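For concreteness, the following is a minimal Python sketch (not part of the patent text) of the kind of aggregate thresholding approach described above; the account identifiers, profiles, and daily-minute limits are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-profile daily airtime limits, in minutes.
DAILY_MINUTE_LIMITS = {"residential": 120, "business": 480}

def thresholding_alerts(call_records, profiles):
    """Aggregate each account's daily airtime and flag accounts exceeding a
    per-profile limit: the coarse, after-the-fact style of detection
    described above."""
    minutes = defaultdict(float)
    for account, duration_min in call_records:
        minutes[account] += duration_min
    alerts = []
    for account, total in minutes.items():
        limit = DAILY_MINUTE_LIMITS.get(profiles.get(account, "residential"), 120)
        if total > limit:
            alerts.append((account, total, limit))
    return alerts

# A legitimate heavy business user can trip the same alarm as a defrauded line.
calls = [("555-0100", 95.0), ("555-0100", 40.0), ("555-0199", 600.0)]
print(thresholding_alerts(calls, {"555-0100": "residential", "555-0199": "business"}))
```

As the example suggests, such thresholds conflate unusually heavy legitimate usage with fraud, which is one reason the background characterizes these systems as slow and imprecise.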
Although speed in detecting fraud may be improved by using technologies such as neural networking, statistical analysis, memory-based reasoning, genetic algorithms, and other data mining techniques, improved fraud detection alone does not completely solve the problem. In particular, even though systems incorporating these techniques may receive and process individual call data on a near real-time basis in an attempt to detect fraud, these systems still do not respond to the detected fraud on a real-time or near real-time basis. In one example, a system may generate an alert to an investigator in a network monitoring or operations center. However, the alerts will generally not be examined or acted upon immediately, thereby resulting in a significant amount of latency in responding to the detected fraud. Because of the reactive nature of these systems in responding to detected fraud, a considerable amount of financial loss is still incurred by service providers and customers after the alert is generated. Furthermore, automated prevention based on inaccurate detection will result in the disruption of service to legitimate subscribers.
SUMMARY OF THE INVENTION
Fraud losses in a communications network are substantially reduced according to the principles of the invention by automatically generating fraud management recommendations in response to suspected fraud and by deriving the recommendations as a function of selected attributes of the fraudulent activity, legitimate activity, and subscriber background information. More specifically, a programmable rules engine is used to automatically generate recommendations for responding to fraud based on call-by-call scoring so that the recommendations correspond directly to the type and amount of suspected fraudulent activity. By automatically generating more precise fraud responses, fraud management according to the principles of the invention is much more effective in meeting operational, financial, and customer satisfaction requirements than prior arrangements, in which a case may sit in a queue until a human investigator analyzes it and decides what action to take, an action that typically shuts down or suspends a customer's account until the fraudulent activity can be investigated. Automated fraud management according to the principles of the invention results in significant cost savings, both in terms of reduced fraud losses and in terms of fewer resources required for investigating suspected fraud. Moreover, investigation time is reduced, thus improving response time to suspected fraud. In one illustrative embodiment for managing telecommunications fraud, an automated fraud management system receives call detail records that have been scored to identify potentially fraudulent calls. Fraud scoring estimates the probability of fraud for each call based on the learned behavior of an individual subscriber and the learned behavior of fraud perpetrators. Importantly, scoring provides an indication of the contribution of various elements of the call detail record to the fraud score for that call. A case analysis is initiated and previously scored call detail records are separated into innocuous and suspicious groups based on fraud scores. Each group is then characterized according to selected variables and scoring for its member calls. These characterizations are combined with subscriber information to generate a set of decision variables. A set of rules is then applied to determine if the current set of decision variables meets definable conditions. When a condition is met, prevention measures associated with that condition are recommended for the account. As one example, recommended prevention measures may be implemented automatically via provisioning functions in the telecommunications network.
According to another aspect of the invention, automated fraud management based on call-by-call scoring facilitates a continuous updating feature. For example, active cases can be re-evaluated as new calls are scored and added to a case. Moreover, a case may be updated as new recommendations are generated.
BRIEF DESCRIPTION OF THE DRAWING
A more complete understanding of the present invention may be obtained from consideration of the following detailed description of the invention in conjunction with the drawing, with like elements referenced with like reference numerals, in which:
FIG. 1 is a simplified block diagram illustrating one embodiment of the invention for managing fraud in a telecommunications network;
FIG. 2 is an exemplary listing of subscriber information that can be used according to the principles of the invention;
FIG. 3 is a simplified block diagram illustrating how call scoring is implemented according to one embodiment of the invention; FIG. 4 is a simplified flowchart of the case analysis process according to one illustrative embodiment of the invention;
FIG. 5A is a simplified block diagram illustrating the step of summarizing case detail according to the embodiment shown in FIG. 4;
FIG. 5B is an exemplary listing of scored call variables that can be used according to the principles of the invention;
FIG. 6 is an exemplary listing of decision variables that can be used according to the principles of the invention; FIG. 7 is a simplified flowchart of the process for generating recommendations for responding to suspected fraudulent activity according to one illustrative embodiment of the invention; and
FIG. 8 is an exemplary listing of prevention measures that can be implemented according to the principles of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Although the illustrative embodiments described herein are particularly well-suited for managing fraud in a telecommunications network, and shall be described in this exemplary context, those skilled in the art will understand from the teachings herein that the principles of the invention may also be employed in other non-telecommunications transaction-based networks. For example, the principles of the invention may be applied in networks that support on-line credit card transactions, internet-based transactions, and the like. Consequently, references to "calls" and "call detail records" in a telecommunications example could be equated with "transactions" and "transaction records", respectively, in a non-telecommunications example, and so on. Accordingly, the embodiments shown and described herein are only meant to be illustrative and not limiting.
FIG. 1 shows one illustrative embodiment of the invention for managing fraud in a typical telecommunications network. More specifically, system 100 is configured to perform various functions and operations to respond to suspected fraudulent activity in telecommunications network 150. As shown, system 100 comprises call scoring function 120, case analysis function 200, and provisioning function 300. To enable these functions, system 100 stores data including, but not limited to, scored call details 401, stored cases 402, and subscriber account information 403. It will be appreciated that system 100 can be implemented in one illustrative embodiment using computer hardware and software programmed to carry out these functions and operations, each of which is described below in further detail.
As is well known, a telecommunications network such as network 150 generates call detail records for each call processed within the network.
According to the principles of the invention, these call detail records are supplied via path 151 to call scoring function 120 within system 100 so that each call can be scored to determine the likelihood of fraud for that particular call. The resultant scored call details are stored as shown in block 401 for later use and are also forwarded to case analysis function 200 for processing. As used herein, the term "case" is meant to represent a potential fraud case that may be developing on a billed account, an originating line/equipment account, a terminating line/equipment account for the call, and the like.
As shown, case analysis function 200 receives scored call details as well as subscriber account information (block 403), examples of which could include the type of account (business, residential), customer's credit rating, customer's credit limit, past billing treatment indicators, date the account was established, and so on. As a result of case analysis, case details are stored as shown in block 402. Additionally, recommendations are automatically generated for responding to suspected fraud on an account. These recommended fraud responses may, for example, include specific prevention measures that correspond to the type and amount of suspected fraudulent activity. As shown in the example of FIG. 1, recommended fraud responses resulting from case analysis function 200 may include responses that can be implemented via provisioning function 300, which is coupled to network 150 via path 152. Well known techniques may be used for provisioning network 150 to respond in a particular way to a particular activity on a call, e.g., block the call, disable call forwarding for this account, and so on.
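For illustration only, the overall data flow of FIG. 1 might be sketched in Python as follows; the dictionary-based record format, the field names, and the function names are assumptions and are not specified by the patent.

```python
# One call detail record with a few of the elements the description lists;
# the field names are illustrative, not the network's actual CDR layout.
example_cdr = {
    "account": "555-0100",
    "originating_number": "555-0100",
    "terminating_number": "+44-20-7946-0000",
    "start_time": 1_700_000_000,      # epoch seconds
    "duration_min": 42.0,
    "call_forwarding": False,
    "international": True,
}

def process_call(cdr, score_call, analyze_case, provision):
    """One pass through system 100: score the call (block 120), run case analysis
    (block 200), and hand any automatic recommendations to provisioning (block 300)."""
    scored = score_call(cdr)                  # fraud score plus per-element contributions
    recommendations = analyze_case(scored)    # may open or update a case (blocks 401/402/403)
    for measure in recommendations:
        if measure.get("automatic"):          # only auto-flagged measures are provisioned
            provision(cdr["account"], measure)
    return scored, recommendations
```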
FIG. 1 further illustrates the iterative and adaptive aspects of the invention with respect to call scoring and case analysis. More specifically, an already active case (e.g., stored in block 402) can be re-evaluated as new calls are scored and added to the case. A case may also be updated as new recommendations are generated as a result of case analysis. For example, call detail records are continually being supplied via path 151 to call scoring function 120. Newly scored calls can then be provided to case analysis function 200 along with previously scored calls stored as shown in block 401. Again, case analysis function 200 analyzes the scored call data in combination with subscriber information (block 403). The table in FIG. 2 shows a listing of some examples of subscriber account information that may be used in case analysis. However, these examples are meant to be illustrative only and not limiting in any way. Returning to FIG. 1, case analysis function 200 may also retrieve an active case (e.g., previously stored in block 402) for further analysis in view of newly scored calls as well as subscriber information (block 403). New recommendations generated by case analysis function 200 may also be added to the already active case. As shown, provisioning measures (block 300) may be implemented as a result of new recommendations generated by case analysis function 200 or as a result of previously generated recommendations associated with a previously stored case (block 402). In this way, automated fraud management according to the principles of the invention allows for continuous updating. Referring to FIG. 3, a more detailed description is now provided for call scoring function 120 from FIG. 1. As previously described, call scoring function 120 supplies fraud score information for calls made in telecommunications network 150 so that appropriate recommendations can be generated for responding to suspected fraudulent activity. More specifically, call scoring function 120 can be implemented as further illustrated in the exemplary embodiment shown in FIG. 3. In general, scoring is based on subscriber behavior analysis wherein a signature (stored in block 1202) representative of a subscriber's calling pattern and a fraud signature (stored in block 1211) representative of a fraudulent calling pattern are used to determine the likelihood of fraud on a particular call. Scored call information is then stored (block 401) for later retrieval and use in the iterative and continuous updating process as well as forwarded for case analysis (200) as will be described below in more detail.
As shown, call detail records are supplied from network 150 to call scoring function 120. A subscriber's signature may be initialized as shown in block 1201 using scored call detail records from calls that have not been confirmed or suspected as fraudulent. Initialization may occur, for example, when a subscriber initially places one or more calls. As further shown in block 1201, stored subscriber signatures from block 1202 can then be updated using newly scored call detail records from subsequent calls that are not confirmed or suspected as fraudulent. As such, a subscriber's signature can adapt to the subscriber's behavior over time. It should be noted that initialization of a subscriber's signature can also be based on predefined attributes of legitimate calling behavior which may be defined by historical call records and the like. In this way, subscription fraud can be detected more readily because a legitimate subscriber's signature, even at the very early stages of calling activity, can be correlated with the expected (or predicted) behavior of legitimate callers. As such, any immediate fraudulent calling behavior on a new account, for example, will not provide the sole basis for initializing the subscriber signature.
It should also be noted that a subscriber signature may monitor many aspects of a subscriber's calling behavior including, but not limited to: calling rate, day of week timing, hour of day timing, call duration, method of billing, geography, and so on. Consequently, a signature may be derived from information that is typically contained within the call detail records, such as: originating number; terminating number; billed number; start time and date; originating location; carrier selection; call waiting indicators; call forwarding indicators; three-way calling/transfer indicators; operator assistance requests; and network security failure indicators, to name a few. The particular elements to be used for establishing and updating a subscriber signature may depend on the type of network (e.g., wireline, wireless, calling card, non-telecommunications, etc.), the particular scoring method being used, as well as other factors that would be apparent to those skilled in the art.
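The following Python sketch illustrates one way such a subscriber signature could be initialized and updated from non-fraudulent calls. The exponentially weighted averaging, the particular fields tracked, and the field names are illustrative assumptions rather than the patent's prescribed method.

```python
from dataclasses import dataclass, field

@dataclass
class SubscriberSignature:
    calls_seen: int = 0
    avg_duration_min: float = 0.0
    intl_rate: float = 0.0
    hour_of_day_dist: list = field(default_factory=lambda: [1.0 / 24] * 24)

def update_signature(sig, cdr, weight=0.05):
    """Fold one non-fraudulent call into the signature (blocks 1201/1202).
    A real system could also blend in predefined attributes of legitimate
    behavior so that a brand-new account does not define its own baseline."""
    sig.calls_seen += 1
    w = max(weight, 1.0 / sig.calls_seen)     # heavier weight while the signature initializes
    sig.avg_duration_min += w * (cdr["duration_min"] - sig.avg_duration_min)
    sig.intl_rate += w * (float(cdr["international"]) - sig.intl_rate)
    hour = cdr["start_hour"]
    sig.hour_of_day_dist = [(1 - w) * p + (w if h == hour else 0.0)
                            for h, p in enumerate(sig.hour_of_day_dist)]
    return sig

# Example: the signature adapts toward afternoon, domestic calling.
sig = SubscriberSignature()
update_signature(sig, {"duration_min": 12.0, "international": False, "start_hour": 14})
print(sig.avg_duration_min, round(sig.hour_of_day_dist[14], 3))
```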
Generally, each call will be scored depending on how the call compares to the subscriber's signature retrieved from block 1202 and how it compares to a fraud signature retrieved from block 1211. By way of example, fraud signatures can be initialized and updated (block 1210) using scored call detail records from confirmed or suspected fraudulent calls. In a simplified example, a high fraud score is generated if the call details represent a suspicious deviation from known behavior and a low fraud score is generated if the call details represent highly typical behavior for the subscriber account in question. In addition to providing an overall fraud score as output from call scoring function 120, the relative contributions of various elements of the call to the fraud score should also be included, the use of which will be described in more detail below relating to case analysis. For example, contributions of the following elements may be included for subsequent case analysis: day of week; time of day; duration; time between consecutive calls; destination; use of call waiting; use of call forwarding; use of three-way calling; use of operator services; origination point; use of roaming services (wireless only); number of handoffs during call (wireless only); appearance of network security alert; carrier selection; and use of international completion services. Again, this listing is meant to be illustrative only and not limiting in any way.
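A minimal sketch of scoring a call against a subscriber signature and a fraud signature follows. The log-likelihood-ratio form, the logistic squashing, and the choice of only two scored elements are assumptions made for illustration; the patent contemplates many scoring methods and many more call elements.

```python
import math

def score_call(cdr, subscriber_sig, fraud_sig):
    """Return a fraud score in [0, 1] plus per-element contributions. Each element
    contributes the log of how much more likely it is under the fraud signature
    (block 1211) than under the subscriber's own signature (block 1202)."""
    contributions = {}

    hour = cdr["start_hour"]
    p_sub = max(subscriber_sig["hour_of_day_dist"][hour], 1e-6)
    p_fraud = max(fraud_sig["hour_of_day_dist"][hour], 1e-6)
    contributions["hour_of_day"] = math.log(p_fraud / p_sub)

    intl = cdr["international"]
    p_sub = max(subscriber_sig["intl_rate"] if intl else 1 - subscriber_sig["intl_rate"], 1e-6)
    p_fraud = max(fraud_sig["intl_rate"] if intl else 1 - fraud_sig["intl_rate"], 1e-6)
    contributions["international"] = math.log(p_fraud / p_sub)

    total = sum(contributions.values())
    fraud_score = 1.0 / (1.0 + math.exp(-total))     # squash to a probability-like value
    return fraud_score, contributions

# A night-time international call scored against a daytime, domestic subscriber.
subscriber = {"hour_of_day_dist": [1.0 / 24] * 24, "intl_rate": 0.02}
fraud = {"hour_of_day_dist": [0.10] * 6 + [0.40 / 18] * 18, "intl_rate": 0.60}
print(score_call({"start_hour": 3, "international": True}, subscriber, fraud))
```

The per-element contributions returned here are what case analysis later uses to characterize the innocuous and suspicious sets.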
Because call scoring is carried out on a customer-specific and call-by-call basis, a more precise fraud score can be obtained that is more indicative of the likelihood of fraud while reducing the amount of false alarms (i.e., "false positives"). Furthermore, to accurately perform call scoring on a call-by-call basis, those skilled in the art will recognize that one suitable implementation would be to execute the above-described functions using a real-time processing platform. One such exemplary real-time processing platform is Lucent Technologies' QTM™ real-time transaction processing platform, which is described in an article by J. Baulier et al., "Sunrise: A Real-Time Event-Processing Framework", Bell Labs Technical Journal, November 24, 1997, and which is herein incorporated by reference. It will be apparent to those skilled in the art that many different call scoring techniques may be suitable for implementing the functionality of call scoring function 120 as described above. In particular, call scoring techniques based on statistical analysis, probabilistic scoring, memory-based reasoning, data mining, neural networking, and other methodologies are known and are contemplated for use in conjunction with the illustrative embodiments of the invention described herein. Some examples of these methods and techniques are described in Fawcett et al., "Adaptive Fraud Detection", Data Mining and Knowledge Discovery 1, 291-316 (1997) and U.S. Patent No. 5,819,226, "Fraud Detection Using Predictive Modeling", issued Oct. 6, 1998, each of which is herein incorporated by reference.
FIG. 4 shows one illustrative embodiment of case analysis function 200 from FIG. 1. As shown in step 201, details associated with a previously scored
call are reviewed to determine whether the call warrants the opening of a new fraud case or addition of the call to an existing case. In particular, the fraud score generated by call scoring function 120 for a particular call and other predetermined variables, such as contributions of specific elements to the fraud score, are reviewed to determine whether the call is "interesting" from a fraud perspective. A call may be "interesting" for any number of different reasons including, but not limited to: a fraud score that exceeds a predetermined (e.g., configurable) value; a fraud score that indicates the culmination of a change in score of a prescribed amount over a prescribed number of calls; an indication of an overlap in time with a previous call (i.e., a "collision"); an indication of a change in origination point between two calls that is impossible for one subscriber to make given the time between those calls (i.e., a "velocity violation"); or being a member of an existing case.
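The following sketch restates these "interesting call" tests as a single predicate. The ScoredCall fields, the travel_hours placeholder, and all numeric thresholds are hypothetical; in practice the score threshold would be the configurable value referred to above.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional, Set

    @dataclass
    class ScoredCall:
        account: str
        origin: str
        start_time: datetime
        end_time: datetime
        fraud_score: float

    SCORE_THRESHOLD = 5.0   # stands in for the configurable value; the number is arbitrary

    def travel_hours(origin_a: str, origin_b: str) -> float:
        # Placeholder travel-time model; a real system would use location data.
        return 0.0 if origin_a == origin_b else 4.0

    def is_interesting(call: ScoredCall, prev: Optional[ScoredCall],
                       open_case_accounts: Set[str],
                       score_delta: Optional[float]) -> bool:
        if call.fraud_score > SCORE_THRESHOLD:
            return True
        if score_delta is not None and score_delta > 3.0:
            return True   # change in score of a prescribed amount over recent calls
        if prev is not None:
            if call.start_time < prev.end_time:
                return True   # "collision": overlap in time with a previous call
            gap_h = (call.start_time - prev.end_time).total_seconds() / 3600.0
            if travel_hours(prev.origin, call.origin) > gap_h:
                return True   # "velocity violation": impossible change in origination point
        return call.account in open_case_accounts   # already a member of an existing case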
If a scored call record is determined to be interesting, a check is made in step 202 to see if there is an existing case on the related account. If no case is found, a new case is created by: 1) retrieving, in step 203, background information on the subscriber account which is stored in the system (see block 403 in FIG. 1); 2) retrieving scored call detail for the account in step 204; and 3) summarizing the scored call detail in step 205. In order to summarize the scored detail in step 205, the call detail records must first be categorized and then each category must be characterized according to predetermined variables. As shown in FIG. 5A, call detail records are first categorized into a plurality of groups or sets, e.g., SET1, SET2 through SETN, which can be classified, for example, as innocuous, suspicious and indeterminate sets. Initial categorization is based on fraud scores, wherein each call detail record is placed into one of the sets based on its fraud score as compared to established values or thresholds defining the categories. Automatic adjustments to this categorization can be made by considering other factors, such as originating location and dialed number to name a few. For example, if a strong majority of call detail records in the innocuous set contain a given highly typical originating location or dialed number, then one possible adjustment is to move all call records having the same attributes in the other sets to the innocuous set. The sets are then characterized by tabulating call
summary variables within each set. In particular, a number of call summary variables may be derived for the case and also for individual sets (e.g., innocuous, suspicious, indeterminate) within a case. The table in FIG. 5B shows one exemplary list of call summary variables that may be useful for case analysis. As shown, variables 410 ("FirstAlertAt" and "CaseScore") pertain to the entire case. For example, "FirstAlertAt" would be used to provide the time when the first high-scoring call (e.g., suspected fraud) occurs for that case, regardless of the particular category the call is initially placed in. "CaseScore" may be used to provide an overall case score based on the individual call scores within the case, again regardless of the particular category within the case.
The remaining variables shown in FIG. 5B are applicable, in this illustrative example, to a particular set within the case, e.g., the innocuous, suspicious, and indeterminate sets. The explanations for each call summary variable are provided in the description field of the table. As shown, the set-dependent call summary variables can be characterized into two types of variables. The first grouping of call summary variables 420, starting with "Number of Calls" through "Hot Number Count", all address a summing type operation in which a count or percentage is maintained for a particular element of the call. Using call summary variable 421 ("Hot Number Count") as an example, this value would represent the total number of calls within a given set in which the called number is a member of a predetermined (and selectable, editable, etc.) "Hot Number" list. Those skilled in the art will readily understand the significance and use of "hot numbers". The remaining call summary variables 430, starting with "Day Score Dist" through "International Score Dist", all address the contribution distribution of a specific element or elements to the fraud score within that set. For example, call summary variable 431 ("Hour Score Dist") represents how the "hour of the day" in which calls in the set were placed influenced or contributed to the fraud score. It should be noted that the call summary variables listed in the table in FIG. 5B are only meant to be illustrative and not limiting in any way. Other call summary
variables may be selected to characterize a set, depending on several factors such as network type, transaction type, and so on.
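One minimal way to express the categorization and summarization of steps 204-205, under assumed score thresholds and with only a handful of the FIG. 5B variables, is sketched below; the attribute names hot_number and call_forwarding are hypothetical fields of a scored call detail record.

    def categorize(scored_cdrs, innocuous_max=2.0, suspicious_min=5.0):
        """Split scored call detail records into sets by fraud score (thresholds assumed)."""
        sets = {"innocuous": [], "indeterminate": [], "suspicious": []}
        for cdr in scored_cdrs:
            if cdr.fraud_score <= innocuous_max:
                sets["innocuous"].append(cdr)
            elif cdr.fraud_score >= suspicious_min:
                sets["suspicious"].append(cdr)
            else:
                sets["indeterminate"].append(cdr)
        return sets

    def summarize(call_set):
        """Tabulate a few call summary variables for one set (names loosely follow FIG. 5B)."""
        n = len(call_set)
        return {
            "NumberOfCalls": n,
            "CFCount": sum(1 for c in call_set if getattr(c, "call_forwarding", False)),
            "HotNumberCount": sum(1 for c in call_set if getattr(c, "hot_number", False)),
            "MeanScore": (sum(c.fraud_score for c in call_set) / n) if n else 0.0,
        }

The adjustment described above, moving records that share a highly typical originating location or dialed number into the innocuous set, would run between these two functions.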
Referring again to FIG. 4, if an existing case is found in step 202, then the case is subsequently retrieved in step 206 and the summary of the case, e.g., the call summary variables from FIG. 5B, is updated with information from the current call (step 207). Based on either a newly created summary (steps 203-205) or an updated summary (steps 206-207), the system calculates a set of decision variables as shown in step 208. More specifically, decision variables are used to determine whether certain conditions have been met, thereby providing the basis for generating recommendations for responding to suspected fraudulent activity in the network. The table in FIG. 6 shows one exemplary list of decision variables that may be useful for case analysis according to the principles of the invention.
As shown in FIG. 6, decision variable 440 may be any of the call summary variables from FIG. 5B or any manipulation of one or more of the call summary variables, such as by ratios, mathematical operations, and so on. For example, any of call summary variables 410, 420, or 430 may individually constitute a decision variable for determining an appropriate recommendation for responding to fraud. Another example of a suitable decision variable could be the combination of two or more of the call summary variables in some predetermined manner, e.g., a ratio of the number of calls in which call forwarding was applied ("CF Count") to the total number of calls in the set ("Number of Calls"). The selection of applicable decision variables may again depend on the type of network, the type of transactions, as well as other factors determined to be applicable. Additional decision variables 450 can also be used to provide further information that may be helpful in analyzing fraudulent activity to determine appropriate recommendations. For example, "AccountAge",
"PreviousFalseAlarms", "AccountType", "CreditRating", and "AlertCounts", each of which is described in the table shown in FIG. 6, may be used. It should be noted that the decision variables listed in the table in FIG. 6 are only meant to be illustrative and not limiting in any way. Other decision variables will be apparent 13
to those skilled in the art given the particular network type, transaction characteristics, and so on.
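A compact way to picture step 208 is a function that maps set summaries and subscriber information to a dictionary of decision variables; the particular names and formulas below only echo FIG. 6 and are assumptions for this sketch.

    def decision_variables(suspicious_summary, case_score, account_info):
        """Derive decision variables from a set summary plus subscriber information."""
        n = max(suspicious_summary["NumberOfCalls"], 1)
        return {
            "CFRatio": suspicious_summary["CFCount"] / n,      # CF Count / Number of Calls
            "HotNumberCount": suspicious_summary["HotNumberCount"],
            "CaseScore": case_score,
            "AccountAge": account_info.get("AccountAge", 0),
            "PreviousFalseAlarms": account_info.get("PreviousFalseAlarms", 0),
        }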
Referring again to FIG. 4, the system then generates, in step 209, one or more recommendations for responding to fraud that may be occurring on an account. FIG. 7 shows one exemplary embodiment of the steps involved in generating recommendations according to the principles of the invention.
A brief overview of the terminology will be helpful in understanding the steps shown in FIG. 7. As described herein, a rule is defined as including a "condition" and a list of one or more "measures". A "condition" can be a Boolean expression that supports comparisons among decision variables (defined in FIG. 6) and predetermined values or constants. In one of its simplest forms, the Boolean expression may use standard Boolean operators, such as AND, OR, NOT, as well as precedence. A single "measure" identifies an action (e.g., block services or block market), parameters associated with the action (e.g., call forwarding for the block services example, or Market 25 for the block market example), as well as a flag indicating whether the measure should be carried out automatically. Generally, rules can be modified by the system user (e.g., service provider) depending on the fraud management requirements for the network.
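In code, a rule of this kind might be held as a condition (a Boolean function over the decision variables) plus a list of measures, each carrying an action, its parameters, and the automatic flag. The sketch below is one such representation; the class names are hypothetical, and the example threshold mirrors the CFcount/numcallsinset > 0.25 example used in the next paragraph.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Measure:
        action: str                    # e.g. "BlockServices" or "BlockMarket"
        parameters: Dict[str, str]     # e.g. {"service": "call_forwarding"} or {"market": "25"}
        automatic: bool = False        # carry out without analyst confirmation?

    @dataclass
    class Rule:
        name: str
        condition: Callable[[Dict[str, float]], bool]   # Boolean test over decision variables
        measures: List[Measure] = field(default_factory=list)

    # Example: heavy call forwarding in the suspicious set triggers a service block.
    cf_rule = Rule(
        name="ExcessiveCallForwarding",
        condition=lambda dv: dv["CFRatio"] > 0.25,
        measures=[Measure("BlockServices", {"service": "call_forwarding"}, automatic=True)],
    )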
Referring to step 2091 in FIG. 7, the system retrieves a list of rules, and processes each rule according to a hierarchy that may be simple, such as first to last, or by some predefined schema. The condition for that rule (e.g., CFcount/numcallsinset > 0.25) is then tested using the applicable decision variables (FIG. 6) specified for that condition. This is depicted in step 2092. If the rule's condition is met, a measure associated with the particular rule is then retrieved in step 2093. As shown in step 2094, if there has been no prior rule which calls for the same action (e.g., Block Services) as that identified in the retrieved measure (from step 2093), then the retrieved measure is added to the list of desired measures in step 2095. If the action has already been required by a previous rule, then the measure is ignored. In this way, precedence is established among rules in the case of conflicting directives. The next step is to determine whether there are more measures associated with the rule as shown in step 2096. If so, then steps 2093-2095 are repeated for all measures in the rule. If there are
no other measures associated with the particular rule (retrieved in step 2091), then the system checks for other applicable rules in step 2097. If there are additional rules, then the process described above in steps 2091-2096 is repeated. Once there are no more applicable rules, then the system returns to step 210 in FIG. 4. Referring again to step 2092, if the rule's condition is not met, then the system examines whether there are more rules in step 2097 and, if so, then the process starts all over again with step 2091. If there are no more rules, then the actions associated with step 210 and subsequent steps in FIG. 4 are processed as described below. As a result of the processing that takes place in the steps illustrated in FIG.
7, one or more recommended measures are automatically generated to respond to suspected fraud. Examples of some actions found in recommended measures are shown in FIG. 8. For example, a recommended measure may be to block all account activity (where the action is "Block Account"), or to block only international dialing (where the action is "Block Dialing" and the associated parameter is "international"), or to block a particular type of service, e.g., call forwarding. It should be noted that this list of recommended actions in FIG. 8 is only meant to be illustrative and not limiting in any way.
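Steps 2091-2097 amount to a small loop over the rule list. The sketch below assumes the hypothetical Rule and Measure classes from the earlier sketch and shows the handling of duplicate actions described in step 2094.

    def generate_recommendations(rules, dvars):
        """Walk rules in priority order and collect measures whose conditions are met,
        ignoring any measure whose action an earlier rule already required."""
        recommended = []
        actions_seen = set()
        for rule in rules:                       # hierarchy here is simple first-to-last
            if not rule.condition(dvars):
                continue
            for measure in rule.measures:
                if measure.action in actions_seen:
                    continue                     # precedence: an earlier rule already required it
                actions_seen.add(measure.action)
                recommended.append(measure)
        return recommended

For instance, generate_recommendations([cf_rule], {"CFRatio": 0.4}) would return the single call-forwarding block measure defined in the earlier sketch.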
It is important to note that the appropriate recommendation can be automatically generated as a function of call-by-call scoring, application of appropriate rules based on the scoring, selection of appropriate call summary variables and decision variables, and so on. As such, the automatically generated recommendations correspond to the call-by-call scoring process such that the recommendations are more precisely targeted to the specific type of fraud that is occurring on the account. For example, most fraud detection and prevention systems are only able to detect the presence of fraud, and only with varying levels of accuracy. Once fraud is detected, these systems typically refer the case for manual investigation. Prevention measures, if they exist at all, are not tailored to the type of suspected fraud. By contrast, the fraud management system according to the principles of the invention not only detects fraud but also collects information about the particular characteristics of that fraud. As a result, the
recommended fraud responses are tailored to the specific type of fraud that is occurring.
As an example, if case analysis determines that the most significant contribution to a high fraud score is related to the use of call forwarding, then an appropriate recommended fraud response can be to shut down the call forwarding service on that account instead of shutting down all service on that account. In this way, fraud losses can be minimized or eliminated while maintaining service to the legitimate subscriber. Moreover, a recommendation to disable call forwarding may be carried out automatically using provisioning features within the network.
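A toy version of that targeting decision is sketched below: the element contributing most to the fraud score selects the response. The mapping table and the names in it are hypothetical; in practice this selection would be driven by the rules described above.

    # Hypothetical mapping from the dominant score contributor to a targeted action.
    TARGETED_RESPONSES = {
        "call_forwarding": ("BlockServices", {"service": "call_forwarding"}),
        "international": ("BlockDialing", {"scope": "international"}),
    }

    def targeted_measure(contributions):
        """Pick the response aimed at the element contributing most to the fraud score,
        rather than shutting down all service on the account."""
        top_element = max(contributions, key=contributions.get)
        return TARGETED_RESPONSES.get(top_element, ("ReferToAnalyst", {}))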
Returning to FIG. 4, the recommendation or recommendations generated in step 209 are compared, in step 210, to recommendations that were previously given for the case. If the recommendations generated in step 209 are not new, then the analysis process ends for that particular call. If the recommendations are new, then the case is updated with the new recommendations in step 211. If any of the new recommendations are of the type to be carried out automatically as determined in step 212, then appropriate implementation actions can be taken accordingly. For example, recommended actions can be implemented automatically via provisioning function 300 (FIG. 1) in the telecommunications network as previously described.
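Steps 210-212 can be pictured as the small bookkeeping routine below, in which measures are plain dictionaries and provision stands in for the network provisioning interface (function 300); both representations are assumptions made for this sketch.

    def process_recommendations(case, new_measures, provision):
        """Record only measures not previously recommended for the case and
        automatically carry out those flagged as automatic."""
        previous = {(m["action"], tuple(sorted(m["parameters"].items())))
                    for m in case["recommendations"]}
        for m in new_measures:
            key = (m["action"], tuple(sorted(m["parameters"].items())))
            if key in previous:
                continue                                  # recommendation already given (step 210)
            case["recommendations"].append(m)             # update the case (step 211)
            if m.get("automatic"):
                provision(m["action"], m["parameters"])   # e.g., disable call forwarding (step 212)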
In sum, the automatic generation of recommendations according to the principles of the invention is predicated on a programmable rules-based engine (e.g., rules can be reprogrammed). Additionally, it is important to remember that the process steps described above in the context of FIGS. 1-8 can all be carried out on a call-by-call basis in the network. Consequently, the rules-based engine is an adaptive system that is used to develop a history of cases, decision criteria and final outcomes on a call-by-call basis in the network. As such, the fraud management system and method according to the principles of the invention provide service providers with fraud management capabilities that go well beyond detection and that can be customized according to user-defined policies, subscriber behaviors, and the like.
As described herein, the present invention can be embodied in the form of methods and apparatuses for practicing those methods. The invention can also be embodied in the form of program code embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits.
It should also be noted that the foregoing merely illustrates the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples and conditional language recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams herein represent conceptual views of illustrative circuitry
embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the drawing may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, a "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the drawing are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementor as more specifically understood from the context.
In the claims hereof any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements which performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. Applicants thus regard any means which can provide those functionalities as equivalent to those shown herein.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for managing fraud in a telecommunications network, comprising the step of: automatically generating one or more recommendations for responding to suspected fraudulent activity in the telecommunications network, wherein the recommendations are derived as a function of calls scored for the likelihood of fraud, and wherein the recommendations correspond to attributes of the suspected fraudulent activity.
2. The computer-implemented method of claim 1, wherein scoring is done on a call-by-call basis.
3. The computer-implemented method of claim 2, further comprising the steps of: receiving call detail records which have been scored to identify potentially fraudulent activity, wherein a scored call detail record provides an indication of the contribution to the fraud score of a plurality of predetermined call variables; and initiating a case analysis based on predetermined criteria relating to changes in fraud scores.
4. The computer-implemented method of claim 3, wherein the step of initiating a case analysis comprises the steps of: a) separating a plurality of scored detail records, based on fraud scores, into at least a first group representative of non-suspicious activity and a second group representative of suspicious activity; b) characterizing each group according to predetermined criteria (variables) and fraud scores for individual calls in the respective groups; c) generating one or more decision variables based on step b) and subscriber information; d) applying one or more rules to the one or more decision variables to determine if a predefined condition is met; and
e) when a predefined condition is met, recommending one or more prescribed fraud responses corresponding to that condition.
5. The computer-implemented method of claim 4, wherein a fraud score of an individual call is representative of the likelihood of fraud based on the learned behavior of a subscriber comprising a subscriber signature and the learned behavior of fraudulent calling activity comprising a fraud signature.
6. The computer-implemented method of claim 5, wherein the one or more prescribed fraud responses includes prevention measures.
7. The computer-implemented method of claim 6, wherein one of the prevention measures includes implementing provisioning-based fraud prevention.
8. The computer-implemented method of claim 1, wherein the recommendations further correspond to attributes of legitimate activity.
9. The computer-implemented method of claim 8, wherein the recommendations further correspond to subscriber information and attributes associated with a case.
10. A computer-implemented method for managing fraud in a network where transactions occur, comprising the step of: automatically generating one or more recommendations for responding to suspected fraudulent activity in the network, wherein the recommendations are derived as a function of transactions scored for the likelihood of fraud, and wherein the recommendations correspond to selected attributes of the suspected fraudulent activity.
11. The computer-implemented method of claim 10, further comprising the steps of: receiving transaction records which have been scored to identify potentially fraudulent activity, wherein a scored transaction record provides an indication of the contribution to the fraud score of a plurality of predetermined transaction variables; and
initiating a case analysis based on predetermined criteria relating to changes in fraud scores.
12. The computer-implemented method of claim 11, wherein the step of initiating a case analysis comprises the steps of: a) separating a plurality of scored transaction records, based on fraud scores, into at least a first group representative of non-suspicious activity and a second group representative of suspicious activity; b) characterizing each group according to predetermined criteria (variables) and fraud scores for individual transactions in the respective groups; c) generating one or more decision variables based on step b) and subscriber information; d) applying one or more rules to the one or more decision variables to determine if a predefined condition is met; and e) when a predefined condition is met, recommending one or more prescribed fraud responses corresponding to that condition.
13. The computer-implemented method of claim 12, wherein a fraud score of an individual transaction is representative of the likelihood of fraud based on the learned behavior of a subscriber comprising a subscriber signature and the learned behavior of fraudulent activity comprising a fraud signature.
14. The computer-implemented method of claim 13, wherein the one or more prescribed fraud responses includes prevention measures.
15. The computer-implemented method of claim 14, wherein one of the prevention measures includes implementing provisioning-based fraud prevention in the network.
16. The computer-implemented method of claim 10, wherein scoring is done on a transaction-by-transaction basis.
17. The computer-implemented method of claim 10, wherein the recommendations further correspond to attributes of legitimate transaction activity.
18. The computer-implemented method of claim 17, wherein the recommendations further correspond to subscriber information.
19. The computer-implemented method of claim 18, wherein the recommendations further correspond to attributes associated with a case.
20. A system for managing fraud in a network where transactions occur, comprising: means for deriving one or more recommendations for responding to suspected fraudulent activity in the network as a function of transactions scored for the likelihood of fraud; and means for automatically generating the one or more recommendations, wherein the recommendations correspond to selected attributes of the suspected fraudulent activity.
21. A system for managing fraud in a telecommunications network where transactions occur, comprising: at least one memory device for receiving, storing, and supplying call detail records that have been scored to identify potentially fraudulent activity, wherein a scored call detail record provides an indication of the contribution to the fraud score of a plurality of predetermined call variables; and a computer processor, coupled to the at least one memory device, for executing programmed instructions to automatically generate one or more recommendations for responding to suspected fraudulent activity in the telecommunications network, wherein the recommendations are derived as a function of the scored call detail records, and wherein the recommendations correspond to selected attributes of the suspected fraudulent activity.
PCT/US1999/007441 1998-04-03 1999-04-05 Automated fraud management in transaction-based networks WO1999052267A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
AU34704/99A AU3470499A (en) 1998-04-03 1999-04-05 Automated fraud management in transaction-based networks
EP99916368A EP1068719A1 (en) 1998-04-03 1999-04-05 Automated fraud management in transaction-based networks
CA002327680A CA2327680A1 (en) 1998-04-03 1999-04-05 Automated fraud management in transaction-based networks
JP2000542905A JP2002510942A (en) 1998-04-03 1999-04-05 Automatic handling of fraudulent means in processing-based networks
BR9909162-3A BR9909162A (en) 1998-04-03 1999-04-05 Computer implemented method and system to control fraud in a telecommunications network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US8000698P 1998-04-03 1998-04-03
US60/080,006 1998-04-03
US09/283,672 US6163604A (en) 1998-04-03 1999-04-01 Automated fraud management in transaction-based networks
US09/283,672 1999-04-01

Publications (1)

Publication Number Publication Date
WO1999052267A1 true WO1999052267A1 (en) 1999-10-14

Family

ID=26762715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/007441 WO1999052267A1 (en) 1998-04-03 1999-04-05 Automated fraud management in transaction-based networks

Country Status (8)

Country Link
US (1) US6163604A (en)
EP (1) EP1068719A1 (en)
JP (2) JP2002510942A (en)
CN (1) CN1296694A (en)
AU (1) AU3470499A (en)
BR (1) BR9909162A (en)
CA (1) CA2327680A1 (en)
WO (1) WO1999052267A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001251403B2 (en) * 2000-04-07 2006-11-09 Fidelity Information Services, Llc System and method for evaluating fraud suspects
US20070198361A1 (en) * 1998-12-04 2007-08-23 Digital River, Inc. Electronic commerce system and method for detecting fraud
US9817650B2 (en) 1998-12-04 2017-11-14 Digital River, Inc. Scheduling of a file download and search for updates

Families Citing this family (135)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070112512A1 (en) * 1987-09-28 2007-05-17 Verizon Corporate Services Group Inc. Methods and systems for locating source of computer-originated attack based on GPS equipped computing device
US7096192B1 (en) 1997-07-28 2006-08-22 Cybersource Corporation Method and system for detecting fraud in a credit card transaction over a computer network
US7403922B1 (en) 1997-07-28 2008-07-22 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
US6546545B1 (en) 1998-03-05 2003-04-08 American Management Systems, Inc. Versioning in a rules based decision management system
US8364578B1 (en) 1998-03-05 2013-01-29 Cgi Technologies And Solutions Inc. Simultaneous customer/account strategy execution in a decision management system
US6601034B1 (en) 1998-03-05 2003-07-29 American Management Systems, Inc. Decision management system which is cross-function, cross-industry and cross-platform
US6609120B1 (en) 1998-03-05 2003-08-19 American Management Systems, Inc. Decision management system which automatically searches for strategy components in a strategy
US6307926B1 (en) * 1998-05-20 2001-10-23 Sprint Communications Company, L.P. System for detection and prevention of telecommunications fraud prior to call connection
US6763098B1 (en) * 1998-06-01 2004-07-13 Mci Communications Corporation Call blocking based on call properties
EP1131976A1 (en) * 1998-11-18 2001-09-12 Lightbridge, Inc. Event manager for use in fraud detection
US6542729B1 (en) * 1999-04-27 2003-04-01 Qualcomm Inc. System and method for minimizing fraudulent usage of a mobile telephone
US7272855B1 (en) * 1999-06-08 2007-09-18 The Trustees Of Columbia University In The City Of New York Unified monitoring and detection of intrusion attacks in an electronic system
US7372485B1 (en) 1999-06-08 2008-05-13 Lightsurf Technologies, Inc. Digital camera device and methodology for distributed processing and wireless transmission of digital images
US8212893B2 (en) 1999-06-08 2012-07-03 Verisign, Inc. Digital camera device and methodology for distributed processing and wireless transmission of digital images
US7369161B2 (en) 1999-06-08 2008-05-06 Lightsurf Technologies, Inc. Digital camera device providing improved methodology for rapidly taking successive pictures
US7140039B1 (en) 1999-06-08 2006-11-21 The Trustees Of Columbia University In The City Of New York Identification of an attacker in an electronic system
US7013296B1 (en) 1999-06-08 2006-03-14 The Trustees Of Columbia University In The City Of New York Using electronic security value units to control access to a resource
US7277863B1 (en) * 2000-06-13 2007-10-02 I2 Technologies Us, Inc. Electronic marketplace communication system
US20030204426A1 (en) * 1999-06-18 2003-10-30 American Management Systems, Inc. Decision management system which searches for strategy components
US6437812B1 (en) * 1999-06-30 2002-08-20 Cerebrus Solutions Limited Graphical user interface and method for displaying hierarchically structured information
US6708155B1 (en) * 1999-07-07 2004-03-16 American Management Systems, Inc. Decision management system with automated strategy optimization
US7103357B2 (en) 1999-11-05 2006-09-05 Lightsurf Technologies, Inc. Media spooler system and methodology providing efficient transmission of media content from wireless devices
US6876991B1 (en) 1999-11-08 2005-04-05 Collaborative Decision Platforms, Llc. System, method and computer program product for a collaborative decision platform
US6404871B1 (en) 1999-12-16 2002-06-11 Mci Worldcom, Inc. Termination number screening
US6396915B1 (en) 1999-12-17 2002-05-28 Worldcom, Inc. Country to domestic call intercept process (CIP)
US6335971B1 (en) 1999-12-17 2002-01-01 Mci Worldcom, Inc. Country to country call intercept process
US6404865B1 (en) * 1999-12-17 2002-06-11 Worldcom, Inc. Domestic to country call intercept process (CIP)
US20070150330A1 (en) * 1999-12-30 2007-06-28 Mcgoveran David O Rules-based method and system for managing emergent and dynamic processes
US7847833B2 (en) 2001-02-07 2010-12-07 Verisign, Inc. Digital camera device providing improved methodology for rapidly taking successive pictures
EP1132797A3 (en) * 2000-03-08 2005-11-23 Aurora Wireless Technologies, Ltd. Method for securing user identification in on-line transaction systems
WO2001073652A1 (en) * 2000-03-24 2001-10-04 Access Business Group International Llc System and method for detecting fraudulent transactions
US6570968B1 (en) * 2000-05-22 2003-05-27 Worldcom, Inc. Alert suppression in a telecommunications fraud control system
US7610216B1 (en) * 2000-07-13 2009-10-27 Ebay Inc. Method and system for detecting fraud
US6850606B2 (en) * 2001-09-25 2005-02-01 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions
US6597775B2 (en) * 2000-09-29 2003-07-22 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions
AU2002228700A1 (en) * 2000-11-02 2002-05-15 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
GB0029229D0 (en) * 2000-11-30 2001-01-17 Unisys Corp Counter measures for irregularities in financial transactions
CA2332255A1 (en) * 2001-01-24 2002-07-24 James A. Cole Automated mortgage fraud detection system and method
US7046831B2 (en) * 2001-03-09 2006-05-16 Tomotherapy Incorporated System and method for fusion-aligned reprojection of incomplete data
US7305354B2 (en) 2001-03-20 2007-12-04 Lightsurf,Technologies, Inc. Media asset management system
US7433710B2 (en) * 2001-04-20 2008-10-07 Lightsurf Technologies, Inc. System and methodology for automated provisioning of new user accounts
US20020161711A1 (en) * 2001-04-30 2002-10-31 Sartor Karalyn K. Fraud detection method
US7865427B2 (en) 2001-05-30 2011-01-04 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
US7313545B2 (en) * 2001-09-07 2007-12-25 First Data Corporation System and method for detecting fraudulent calls
US6782371B2 (en) * 2001-09-20 2004-08-24 Ge Financial Assurance Holdings, Inc. System and method for monitoring irregular sales activity
US6636592B2 (en) * 2001-09-28 2003-10-21 Dean C. Marchand Method and system for using bad billed number records to prevent fraud in a telecommunication system
US7149296B2 (en) * 2001-12-17 2006-12-12 International Business Machines Corporation Providing account usage fraud protection
US7724281B2 (en) 2002-02-04 2010-05-25 Syniverse Icx Corporation Device facilitating efficient transfer of digital content from media capture device
US20030233331A1 (en) * 2002-06-18 2003-12-18 Timothy Laudenbach Online credit card security method
US7051040B2 (en) 2002-07-23 2006-05-23 Lightsurf Technologies, Inc. Imaging system providing dynamic viewport layering
US20040064401A1 (en) * 2002-09-27 2004-04-01 Capital One Financial Corporation Systems and methods for detecting fraudulent information
US20110202565A1 (en) * 2002-12-31 2011-08-18 American Express Travel Related Services Company, Inc. Method and system for implementing and managing an enterprise identity management for distributed security in a computer system
US7143095B2 (en) * 2002-12-31 2006-11-28 American Express Travel Related Services Company, Inc. Method and system for implementing and managing an enterprise identity management for distributed security
US7774842B2 (en) * 2003-05-15 2010-08-10 Verizon Business Global Llc Method and system for prioritizing cases for fraud detection
US7817791B2 (en) * 2003-05-15 2010-10-19 Verizon Business Global Llc Method and apparatus for providing fraud detection using hot or cold originating attributes
US7783019B2 (en) * 2003-05-15 2010-08-24 Verizon Business Global Llc Method and apparatus for providing fraud detection using geographically differentiated connection duration thresholds
US7971237B2 (en) * 2003-05-15 2011-06-28 Verizon Business Global Llc Method and system for providing fraud detection for remote access services
US9412123B2 (en) 2003-07-01 2016-08-09 The 41St Parameter, Inc. Keystroke analysis
US20050125280A1 (en) * 2003-12-05 2005-06-09 Hewlett-Packard Development Company, L.P. Real-time aggregation and scoring in an information handling system
US20050192895A1 (en) * 2004-02-10 2005-09-01 First Data Corporation Methods and systems for processing transactions
US20050192897A1 (en) * 2004-02-10 2005-09-01 First Data Corporation Methods and systems for payment-network enrollment
US10999298B2 (en) 2004-03-02 2021-05-04 The 41St Parameter, Inc. Method and system for identifying users and detecting fraud by use of the internet
IL161217A (en) * 2004-04-01 2013-03-24 Cvidya 2010 Ltd Detection of outliers in communication networks
US7313575B2 (en) * 2004-06-14 2007-12-25 Hewlett-Packard Development Company, L.P. Data services handler
US20050286686A1 (en) * 2004-06-28 2005-12-29 Zlatko Krstulich Activity monitoring systems and methods
US8631493B2 (en) * 2004-08-12 2014-01-14 Verizon Patent And Licensing Inc. Geographical intrusion mapping system using telecommunication billing and inventory systems
US8572734B2 (en) 2004-08-12 2013-10-29 Verizon Patent And Licensing Inc. Geographical intrusion response prioritization mapping through authentication and flight data correlation
US8418246B2 (en) * 2004-08-12 2013-04-09 Verizon Patent And Licensing Inc. Geographical threat response prioritization mapping system and methods of use
US8091130B1 (en) 2004-08-12 2012-01-03 Verizon Corporate Services Group Inc. Geographical intrusion response prioritization mapping system
US8082506B1 (en) * 2004-08-12 2011-12-20 Verizon Corporate Services Group Inc. Geographical vulnerability mitigation response mapping system
US20060041464A1 (en) * 2004-08-19 2006-02-23 Transunion Llc. System and method for developing an analytic fraud model
US7653742B1 (en) 2004-09-28 2010-01-26 Entrust, Inc. Defining and detecting network application business activities
US20060095963A1 (en) * 2004-10-29 2006-05-04 Simon Crosby Collaborative attack detection in networks
US7802722B1 (en) * 2004-12-31 2010-09-28 Teradata Us, Inc. Techniques for managing fraud information
US7783745B1 (en) 2005-06-27 2010-08-24 Entrust, Inc. Defining and monitoring business rhythms associated with performance of web-enabled business processes
US8082349B1 (en) * 2005-10-21 2011-12-20 Entrust, Inc. Fraud protection using business process-based customer intent analysis
US8175939B2 (en) * 2005-10-28 2012-05-08 Microsoft Corporation Merchant powered click-to-call method
US7760861B1 (en) * 2005-10-31 2010-07-20 At&T Intellectual Property Ii, L.P. Method and apparatus for monitoring service usage in a communications network
EP1949576A2 (en) * 2005-11-15 2008-07-30 SanDisk IL Ltd Method for call-theft detection
US8938671B2 (en) 2005-12-16 2015-01-20 The 41St Parameter, Inc. Methods and apparatus for securely displaying digital images
US11301585B2 (en) 2005-12-16 2022-04-12 The 41St Parameter, Inc. Methods and apparatus for securely displaying digital images
US10127554B2 (en) * 2006-02-15 2018-11-13 Citibank, N.A. Fraud early warning system and method
US20070204033A1 (en) * 2006-02-24 2007-08-30 James Bookbinder Methods and systems to detect abuse of network services
US8151327B2 (en) 2006-03-31 2012-04-03 The 41St Parameter, Inc. Systems and methods for detection of session tampering and fraud prevention
US8411833B2 (en) * 2006-10-03 2013-04-02 Microsoft Corporation Call abuse prevention for pay-per-call services
US9008617B2 (en) * 2006-12-28 2015-04-14 Verizon Patent And Licensing Inc. Layered graphical event mapping
US10769290B2 (en) * 2007-05-11 2020-09-08 Fair Isaac Corporation Systems and methods for fraud detection via interactive link analysis
WO2008141168A1 (en) * 2007-05-11 2008-11-20 Fair Isaac Corporation Systems and methods for fraud detection via interactive link analysis
US20090106151A1 (en) * 2007-10-17 2009-04-23 Mark Allen Nelsen Fraud prevention based on risk assessment rule
US8255318B2 (en) * 2007-10-18 2012-08-28 First Data Corporation Applicant authentication
US7979894B2 (en) * 2008-01-08 2011-07-12 First Data Corporation Electronic verification service systems and methods
US10510025B2 (en) * 2008-02-29 2019-12-17 Fair Isaac Corporation Adaptive fraud detection
US8135388B1 (en) * 2009-01-07 2012-03-13 Sprint Communications Company L.P. Managing communication network capacity
US20100235908A1 (en) * 2009-03-13 2010-09-16 Silver Tail Systems System and Method for Detection of a Change in Behavior in the Use of a Website Through Vector Analysis
US20100235909A1 (en) * 2009-03-13 2010-09-16 Silver Tail Systems System and Method for Detection of a Change in Behavior in the Use of a Website Through Vector Velocity Analysis
US9112850B1 (en) 2009-03-25 2015-08-18 The 41St Parameter, Inc. Systems and methods of sharing information through a tag-based consortium
JP2011023903A (en) * 2009-07-15 2011-02-03 Fujitsu Ltd Abnormality detector of communication terminal, and abnormality detection method of communication terminal
US8543522B2 (en) 2010-04-21 2013-09-24 Retail Decisions, Inc. Automatic rule discovery from large-scale datasets to detect payment card fraud using classifiers
US9800721B2 (en) 2010-09-07 2017-10-24 Securus Technologies, Inc. Multi-party conversation analyzer and logger
WO2012054646A2 (en) 2010-10-19 2012-04-26 The 41St Parameter, Inc. Variable risk engine
US10754913B2 (en) 2011-11-15 2020-08-25 Tapad, Inc. System and method for analyzing user device information
US9633201B1 (en) 2012-03-01 2017-04-25 The 41St Parameter, Inc. Methods and systems for fraud containment
US9521551B2 (en) 2012-03-22 2016-12-13 The 41St Parameter, Inc. Methods and systems for persistent cross-application mobile device identification
WO2014022813A1 (en) 2012-08-02 2014-02-06 The 41St Parameter, Inc. Systems and methods for accessing records via derivative locators
WO2014078569A1 (en) 2012-11-14 2014-05-22 The 41St Parameter, Inc. Systems and methods of global identification
US9426302B2 (en) * 2013-06-20 2016-08-23 Vonage Business Inc. System and method for non-disruptive mitigation of VOIP fraud
US10902327B1 (en) 2013-08-30 2021-01-26 The 41St Parameter, Inc. System and method for device identification and uniqueness
US20160012544A1 (en) * 2014-05-28 2016-01-14 Sridevi Ramaswamy Insurance claim validation and anomaly detection based on modus operandi analysis
US10091312B1 (en) 2014-10-14 2018-10-02 The 41St Parameter, Inc. Data structures for intelligently resolving deterministic and probabilistic device identifiers to device profiles and/or groups
US10111102B2 (en) * 2014-11-21 2018-10-23 Marchex, Inc. Identifying call characteristics to detect fraudulent call activity and take corrective action without using recording, transcription or caller ID
US9922048B1 (en) 2014-12-01 2018-03-20 Securus Technologies, Inc. Automated background check via facial recognition
CN104572615A (en) * 2014-12-19 2015-04-29 深圳中创华安科技有限公司 Method and system for on-line case investigation processing
US9641680B1 (en) * 2015-04-21 2017-05-02 Eric Wold Cross-linking call metadata
US9729727B1 (en) * 2016-11-18 2017-08-08 Ibasis, Inc. Fraud detection on a communication network
CN106779675A (en) * 2016-11-22 2017-05-31 国家计算机网络与信息安全管理中心山东分中心 A kind of Mobile banking's safety of payment method for monitoring and analyzing and system
US9774726B1 (en) * 2016-12-22 2017-09-26 Microsoft Technology Licensing, Llc Detecting and preventing fraud and abuse in real time
GB2563947B (en) 2017-06-30 2020-01-01 Resilient Plc Fraud Detection System
US10694026B2 (en) * 2017-08-16 2020-06-23 Royal Bank Of Canada Systems and methods for early fraud detection
US10855666B2 (en) 2018-06-01 2020-12-01 Bank Of America Corporation Alternate user communication handling based on user identification
US10785214B2 (en) 2018-06-01 2020-09-22 Bank Of America Corporation Alternate user communication routing for a one-time credential
US10798126B2 (en) 2018-06-01 2020-10-06 Bank Of America Corporation Alternate display generation based on user identification
US10785220B2 (en) 2018-06-01 2020-09-22 Bank Of America Corporation Alternate user communication routing
US10972472B2 (en) 2018-06-01 2021-04-06 Bank Of America Corporation Alternate user communication routing utilizing a unique user identification
EP3813347B1 (en) * 2018-06-20 2023-02-08 KT Corporation Apparatus and method for detecting illegal call
US11184481B1 (en) 2018-08-07 2021-11-23 First Orion Corp. Call screening service for communication devices
US10484532B1 (en) * 2018-10-23 2019-11-19 Capital One Services, Llc System and method detecting fraud using machine-learning and recorded voice clips
US11164206B2 (en) * 2018-11-16 2021-11-02 Comenity Llc Automatically aggregating, evaluating, and providing a contextually relevant offer
GB2580325B (en) * 2018-12-28 2023-09-06 Resilient Plc Fraud detection system
US10951770B2 (en) * 2019-04-16 2021-03-16 Verizon Patent And Licensing Inc. Systems and methods for utilizing machine learning to detect and determine whether call forwarding is authorized
US11531780B2 (en) * 2019-05-15 2022-12-20 International Business Machines Corporation Deep learning-based identity fraud detection
US11153435B2 (en) 2019-09-24 2021-10-19 Joseph D. Grabowski Method and system for automatically blocking robocalls
US10805458B1 (en) 2019-09-24 2020-10-13 Joseph D. Grabowski Method and system for automatically blocking recorded robocalls
US11483428B2 (en) 2019-09-24 2022-10-25 Joseph D. Grabowski Method and system for automatically detecting and blocking robocalls
US11050879B1 (en) * 2019-12-31 2021-06-29 First Orion Corp. Call traffic data monitoring and management
US11343381B2 (en) 2020-07-16 2022-05-24 T-Mobile Usa, Inc. Identifying and resolving telecommunication network problems using network experience score

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0618713A2 (en) * 1993-03-31 1994-10-05 AT&T Corp. Real-time fraud monitoring system in a telecommunication network
EP0653868A2 (en) * 1993-11-12 1995-05-17 AT&T Corp. Resource access control system
EP0661863A2 (en) * 1993-12-29 1995-07-05 AT&T Corp. Security system for terminating fraudulent telephone calls
GB2303275A (en) * 1995-07-13 1997-02-12 Northern Telecom Ltd Detecting mobile telephone misuse
WO1997037486A1 (en) * 1996-03-29 1997-10-09 British Telecommunications Public Limited Company Fraud monitoring in a telecommunications network
WO1999005844A1 (en) * 1997-07-22 1999-02-04 British Telecommunications Public Limited Company Fraud monitoring system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4799255A (en) * 1987-01-30 1989-01-17 American Telephone And Telegraph Company - At&T Information Systems Communication facilities access control arrangement
US5375244A (en) * 1992-05-29 1994-12-20 At&T Corp. System and method for granting access to a resource
US5357564A (en) * 1992-08-12 1994-10-18 At&T Bell Laboratories Intelligent call screening in a virtual communications network
US5819226A (en) * 1992-09-08 1998-10-06 Hnc Software Inc. Fraud detection using predictive modeling
US5345595A (en) * 1992-11-12 1994-09-06 Coral Systems, Inc. Apparatus and method for detecting fraudulent telecommunication activity
US5506893A (en) * 1993-02-19 1996-04-09 At&T Corp. Telecommunication network arrangement for providing real time access to call records
US5602906A (en) * 1993-04-30 1997-02-11 Sprint Communications Company L.P. Toll fraud detection system
US5448760A (en) * 1993-06-08 1995-09-05 Corsair Communications, Inc. Cellular telephone anti-fraud system
US5566234A (en) * 1993-08-16 1996-10-15 Mci Communications Corporation Method for controlling fraudulent telephone calls
US5504810A (en) * 1993-09-22 1996-04-02 At&T Corp. Telecommunications fraud detection scheme
US5627886A (en) * 1994-09-22 1997-05-06 Electronic Data Systems Corporation System and method for detecting fraudulent network usage patterns using real-time network monitoring
US5768354A (en) * 1995-02-02 1998-06-16 Mci Communications Corporation Fraud evaluation and reporting system and method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0618713A2 (en) * 1993-03-31 1994-10-05 AT&T Corp. Real-time fraud monitoring system in a telecommunication network
EP0653868A2 (en) * 1993-11-12 1995-05-17 AT&T Corp. Resource access control system
EP0661863A2 (en) * 1993-12-29 1995-07-05 AT&T Corp. Security system for terminating fraudulent telephone calls
GB2303275A (en) * 1995-07-13 1997-02-12 Northern Telecom Ltd Detecting mobile telephone misuse
WO1997037486A1 (en) * 1996-03-29 1997-10-09 British Telecommunications Public Limited Company Fraud monitoring in a telecommunications network
WO1999005844A1 (en) * 1997-07-22 1999-02-04 British Telecommunications Public Limited Company Fraud monitoring system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MOREAU Y ET AL: "Detection of mobile phone fraud using supervised neural networks: a first prototype", ARTIFICIAL NEURAL NETWORKS - ICANN '97. PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE, 8 October 1997 (1997-10-08) - 10 October 1997 (1997-10-10), Lausanne (CH), pages 1065 - 1070, XP002109506, ISBN: 3-540-63631-5 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070198361A1 (en) * 1998-12-04 2007-08-23 Digital River, Inc. Electronic commerce system and method for detecting fraud
US8271396B2 (en) * 1998-12-04 2012-09-18 Digital River, Inc. Electronic commerce system and method for detecting fraud
US9817650B2 (en) 1998-12-04 2017-11-14 Digital River, Inc. Scheduling of a file download and search for updates
AU2001251403B2 (en) * 2000-04-07 2006-11-09 Fidelity Information Services, Llc System and method for evaluating fraud suspects
US8775284B1 (en) 2000-04-07 2014-07-08 Vectorsgi, Inc. System and method for evaluating fraud suspects

Also Published As

Publication number Publication date
BR9909162A (en) 2002-02-05
EP1068719A1 (en) 2001-01-17
JP2002510942A (en) 2002-04-09
AU3470499A (en) 1999-10-25
US6163604A (en) 2000-12-19
CA2327680A1 (en) 1999-10-14
JP2009022042A (en) 2009-01-29
CN1296694A (en) 2001-05-23

Similar Documents

Publication Publication Date Title
US6163604A (en) Automated fraud management in transaction-based networks
US6157707A (en) Automated and selective intervention in transaction-based networks
US7698182B2 (en) Optimizing profitability in business transactions
US6836540B2 (en) Systems and methods for offering a service to a party associated with a blocked call
Estévez et al. Subscription fraud prevention in telecommunications using fuzzy rules and neural networks
US8583527B2 (en) System and method for independently authorizing auxiliary communication services
US20130336169A1 (en) Real-Time Fraudulent Traffic Security for Telecommunication Systems
EP1318655B1 (en) A method for detecting fraudulent calls in telecommunication networks using DNA
US20060269050A1 (en) Adaptive fraud management systems and methods for telecommunications
Arafat et al. Detection of wangiri telecommunication fraud using ensemble learning
US6636592B2 (en) Method and system for using bad billed number records to prevent fraud in a telecommunication system
Panigrahi et al. Use of dempster-shafer theory and Bayesian inferencing for fraud detection in mobile communication networks
CN115564449A (en) Risk control method and device for transaction account and electronic equipment
US6590967B1 (en) Variable length called number screening
MXPA00009409A (en) Automated fraud management in transaction-based networks
Ritika Fraud detection and management for telecommunication systems using artificial intelligence (AI)
US8068590B1 (en) Optimizing profitability in business transactions
CA3143760A1 (en) Systems and methods for use in blocking of robocall and scam call phone numbers
Masrub et al. SIM Boxing Problem: ALMADAR ALJADID Case Study
Estévez Valencia et al. Subscription fraud prevention in telecommunications using fuzzy rules and neural networks
Gurunadham Identifying Telecommunication Deception using Neural Networks through Data Mining

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 99804523.3

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 1999916368

Country of ref document: EP

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: PA/a/2000/009409

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 2327680

Country of ref document: CA

Kind code of ref document: A

Ref document number: 2000 542905

Country of ref document: JP

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 1999916368

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642