WO2004090659A2 - Optimizing active decision making using simulated decision making - Google Patents

Optimizing active decision making using simulated decision making

Info

Publication number
WO2004090659A2
Authority
WO
WIPO (PCT)
Prior art keywords
decision
simulation
decision making
simulation model
making process
Prior art date
Application number
PCT/IB2004/050401
Other languages
French (fr)
Other versions
WO2004090659A3 (en)
Inventor
Mukesh Dalal
Armand Prieditis
Original Assignee
Mukesh Dalal
Armand Prieditis
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mukesh Dalal and Armand Prieditis
Priority to US10/552,645 (US20060200333A1)
Publication of WO2004090659A2
Publication of WO2004090659A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00: Computing arrangements based on specific mathematical models
    • G06N7/01: Probabilistic graphical models, e.g. probabilistic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00: Details relating to CAD techniques
    • G06F2111/08: Probabilistic or stochastic CAD

Definitions

  • Belief Networks with User-Defined Functions To declare a function that a user can reuse, the user must specify certain information as in any modern programming language: parameters, local variables, and other functions within their scope. All of these can be referred to in the Bayes Network for the definition of a function. Once defined, functions can be re-used just as any built-in function. This is an important way for the user to extend the set of built-in functions to suit a particular application and to facilitate re-use.
  • the function definition involves the specification of a Belief Network possibly with multiple nodes, but with a single output node representing the result of the function.
  • the other nodes can be a combination of parameters, local variables, or variables global in scope to that function. Each node can reference other functions or recursively reference the current function.
  • the way this change in state takes place can include, but is not limited to: user-specified operations on the Belief Network and multiple sampling of the belief network according to the transition function. In the case of sampling, the weights of the network are adjusted for each prior combination of variables, for each child node and its parent nodes.
  • This compact representation of a belief vector allows the solution of POMDPs of greater complexity than before. Moreover, it generalizes approaches such as Kalman Filtering.

Abstract

A method and a computer-implemented system (Fig. 5) for improving an active decision making process by using a simulation model of the decision making process. The simulation model is used to evaluate the impact of alternative decisions at a choice point, in order to select one alternative. The method or system may be integrated with an external system, like a manufacturing execution system. The simulation model may contain a Bayesian network.

Description

OPTIMIZING ACTIVE DECISION MAKING USING SIMULATED
DECISION MAKING
TECHNICAL FIELD This invention relates in general to the field of decision making, and more particularly, to the integration of simulated and active decision making.
BACKGROUND ART
Decision making requires choosing among several alternatives. For a decision-making agent, this might involve selecting a specific action from several alternative actions that are possible at any given point of time. Active decision making involves repeating this selection of an appropriate action in real-time at subsequent points of time.
According to decision-theory, we should always make the decision that maximizes our future utility, where utility is some measure such as profit or loss, pain or pleasure, or time. To make a real-time decision, we need to elaborate possible future decision sequences and choose that immediate decision that results in the highest utility. For example, in chess, we might be considering five possible moves and for each of those five moves, we might have to consider five responses from our opponent, and for each of those five responses we might have to consider five responses from us...and so on, until we reach the end of the game, where the outcome is either a win, loss, or draw. If one of those moves is guaranteed to lead to a winning outcome, then that move is the move of choice. FIG. 1 shows a portion of the lookahead tree built with this strategy. Nodes in this tree represent states or situations of the chessboard; directed arcs represent moves that result in a new state. The arcs emanating from the root node represent the first player's move possibilities; the ones from the level below that, the second player's move possibilities; alternate layers represent the alternating moves between the two players. If moving the queen is guaranteed to lead to a winning outcome and the rest of the moves are not, then that is the move of choice.
In general, it is not feasible to compute the actual outcome for a given position (except for those near the end) in real time because the full lookahead tree is too large to search. As a result, most chess programs look ahead to a limited horizon, and at this horizon they return a heuristic estimate of the final outcome. A positive heuristic estimate might signify a desirable state; a negative estimate, an undesirable state; and a zero estimate, a neutral state. In particular, IBM's Deep Blue program used a weighted combination of material, position, king's safety and tempo for its heuristic function. For example, the material portion might score each pawn 1, each bishop 3, each knight 4, each rook 5, and the queen 9. Using such a lookahead, Deep Blue managed to defeat Garry Kasparov, the world champion human chess player. Branch-and-bound pruning is a typical approach to further restrict the lookahead tree. It allows pruning a path to a state if it can be proved that the outcome at that state will not affect the value of an ancestor. For example, FIG. 2 shows a lookahead tree where the object is to compute the minimum value over the tree. If the heuristic is guaranteed to be a lower bound of the final backed-up value, then we can use that property to prune an entire subtree, thus increasing the efficiency of lookahead. The figure shows that, after visiting the left subtree, the root's value will be less than or equal to three; if the heuristic underestimate at the right subtree is five, then that entire subtree can be pruned according to the branch-and-bound principle.
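As an illustration of this principle, the following C++ sketch performs branch-and-bound minimization over a small lookahead tree; the Node type, leaf values, and lower bounds are hypothetical stand-ins rather than anything specified here, and a subtree is pruned as soon as its admissible lower bound can no longer affect an ancestor's value.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

// Hypothetical lookahead node: leaves carry exact outcome values; interior
// nodes carry an admissible (never overestimating) lower bound on their value.
struct Node {
    double lowerBound;
    double leafValue;                 // used only when 'children' is empty
    std::vector<Node> children;
};

// Minimize over the tree. 'bound' is the best (smallest) value found so far at
// an ancestor: a subtree whose lower bound already reaches that value cannot
// change the ancestor's value, so it is pruned (the branch-and-bound principle).
double minValue(const Node& n, double bound) {
    if (n.children.empty()) return n.leafValue;
    if (n.lowerBound >= bound) return n.lowerBound;   // prune the whole subtree
    double best = bound;
    for (const Node& c : n.children)
        best = std::min(best, minValue(c, best));
    return best;
}

int main() {
    Node leaf3{0.0, 3.0, {}}, leaf7{0.0, 7.0, {}};
    Node leaf6{0.0, 6.0, {}}, leaf9{0.0, 9.0, {}};
    Node left{0.0, 0.0, {leaf3, leaf7}};    // backs the value 3 up to the root
    Node right{5.0, 0.0, {leaf6, leaf9}};   // lower bound 5: pruned after the left subtree
    Node root{0.0, 0.0, {left, right}};
    std::cout << minValue(root, std::numeric_limits<double>::infinity()) << "\n";  // prints 3
}
```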
FIG. 3 shows why branch-and-bound fails when uncertainty is present. In this figure, bundled arcs represent uncertainty. For example, the root's right child is a single decision that has two possible outcomes, one with probability 0.2 and the other with probability 0.8. This might represent a machine producing a faulty part with probability 0.2 and a non-faulty one with probability 0.8. The final backed-up value at this probabilistic child is the weighted sum of the two sibling outcomes, where the weighting is according to the probabilities. Given that the current pruning bound at the root is 3 (derived from the left child), the pruning bound at the child with the 0.2 arc becomes 3/0.2 = 15, which is an increase in the pruning bound of the parent. Since this increase will happen at any node with uncertainty below it, the pruning bound grows exponentially large and therefore becomes useless for pruning. This means that little or no pruning is possible when uncertainty is involved. As a result, it is not feasible to produce deep lookahead trees in problems involving uncertainty unless alternate lookahead search methods are developed. Worse still, standard lookahead fails when the probability distribution is continuous (e.g. the processing time for a machine might be normally distributed) because the number of children is infinite.
The traditional approach to model uncertainty relies on Markov Decision Processes (MDPs). An MDP consists of a tuple <S,A,p, G> :
• S is a set of states that represent situations in the particular world. For example, it might represent the set of possible buffer values and the state of each machine (busy, idling, producing a certain part). Just as in chess, the state space is explicitly built through the application of actions and only that portion of the state space necessary to make a decision is enumerated. States in manufacturing encode the time and in-process actions.
• A is the set of actions (decisions). An action in our manufacturing environment is to commit a particular resource to a particular task. An action can also be a vector of parallel actions, one for each resource.
• p(t|a,s) is the probability of state t given action a in state s. This is the transition function that reflects the outcome of applying an action to a state. For example, it can capture that a part might be produced with a certain defect with a certain probability.
• G(s) is a goal predicate that is true when a terminal state is reached. In a make-to-order manufacturing environment, G(s) is true when all orders have been fulfilled. In a make-to-stock environment, this might capture stochastic (forecasted) demand over a given time interval.
• u(s) is the utility of that goal state. We plan on using profit/loss as the utility. We assume that the state encodes any profit or loss incurred along the path to the goal state; this simplifies the presentation of the objective function below.
Using this framework, it is possible to define an objective function:
f(s) = u(s) if G(s) is true; otherwise f(s) = max_a Σ_{t∈S} p(t|a,s) f(t)
Bellman introduced a form of this function in 1957, and others have since elaborated it in the fields of Operations Research and Artificial Intelligence. According to decision theory, we want to choose the action a that maximizes Σ_{t∈S} p(t|a,s) f(t) for state s. The function f(t) is computed by lookahead.
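A minimal sketch of this Bellman-style lookahead, with hypothetical integer encodings for states and actions and made-up probabilities and utilities, might be:

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <vector>

// Toy encodings for illustration; S, A, p(t|a,s), G(s) and u(s) follow the
// definitions above, but the states, actions and numbers here are made up.
using State = int;
using Action = int;
struct Outcome { State next; double prob; };

std::map<std::pair<State, Action>, std::vector<Outcome>> P;  // p(t|a,s)
std::map<State, double> utility;                             // u(s) on terminals
std::map<State, bool> isGoal;                                // G(s)

// f(s) = u(s) if G(s); otherwise max over actions a of sum_t p(t|a,s) * f(t).
double f(State s) {
    if (isGoal[s]) return utility[s];
    double best = -1e18;
    for (Action a : {0, 1}) {                                // two actions in this toy MDP
        double expected = 0.0;
        for (const Outcome& o : P[{s, a}]) expected += o.prob * f(o.next);
        best = std::max(best, expected);
    }
    return best;
}

int main() {
    // State 0 is the decision state; states 1..4 are terminals with known utilities.
    isGoal[1] = isGoal[2] = isGoal[3] = isGoal[4] = true;
    utility[1] = 10; utility[2] = 0; utility[3] = 4; utility[4] = 6;
    P[{0, 0}] = {{1, 0.2}, {2, 0.8}};                        // action 0: 20% state 1, 80% state 2
    P[{0, 1}] = {{3, 0.5}, {4, 0.5}};                        // action 1: 50/50 states 3 and 4
    std::cout << "f(0) = " << f(0) << "\n";                  // max(2.0, 5.0) = 5
}
```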
In contrast to an artificial application like chess, the complexity of real-world applications makes lookahead more challenging. Since generating and applying actions in real-world applications typically takes much more time than in a chess game, deep lookahead with branch-and-bound pruning is not practical, even without uncertainty. In other words, the problem with real-world MDPs is that they cannot be efficiently computed for deep lookahead. Coming from a different background, US Patent 5,764,953, issued on June 9, 1998 to Collins et al., describes an integration of active and simulated decision making processes. However, the purpose of that integration was to reduce the cost and errors of simulation; that is, the active decision making process was used to improve the simulated decision making process. Our integration of the two processes is for the opposite purpose: the simulated decision making process is used to improve the active decision making process. Because of this reversal of purpose, our integration is also technically very different from their integration.
Real-time decision-making in real-world manufacturing applications is currently dominated by dispatch rules. A dispatch rule is a fixed rule used to rapidly make processing or transfer decisions. Examples include:
• Kanban: produce a part only if required by a downstream machine.
• CONWIP: maintain a constant set of items in each buffer.
• First-come, first-served.
• Choose the shortest route to get to a destination.
Dispatch rules have several problems. First, they are myopic: they don't take into account the future impact of their decisions. Any fixed, finite rule can capture only a finite portion of the manufacturing complexity. As a result, dispatch rules are notorious for making non-optimal decisions. Second, they do not take advantage of additional decision-making time that might be available to improve decision-making quality (say, through lookahead). The traditional "control-oriented" view is that a fixed dispatch rule is determined ahead of time, programmed into a controller, and executed without further deliberation. Third, most dispatch rules do not take into account the particular target goal state; they are applied blindly.
DISCLOSURE OF INVENTION
In view of the shortcomings of existing lookahead techniques for active decision making, this invention is directed to a new lookahead process for active decision making that integrates a simulated decision making process.
A preferred embodiment of our invention uses a new type of objective function, one that computes the expected outcome:
f(s) = u(s) if G(s) is true; otherwise f(s) = Σ_{a∈A} p(a|s) Σ_{t∈S} p(t|a,s) f(t)
In this embodiment, the decisions are made according to the same rule as before: choose that action a that maximizes Σ_{t∈S} p(t|a,s) f(t) for state s, where f(t) is computed with the above expectation-based function rather than the Bellman-style function. Instead of the actual expected outcome, its estimate could be used by sampling some of the actions a and states s.
The function p(a|s) is called a stochastic policy, which is the probability of action a given state s. This policy guides the decision-maker by appropriately weighting the outcome of each branch during the computation of the above objective function. For example, FIG. 4 shows that the expected value is 4.75 at the root. For multiple decision-making agents, this function defines a stochastic coordination policy, which describes how all agents are likely to behave.
We have developed a sampling apparatus to compute this function efficiently. This apparatus is based on the Monte Carlo principle of simulation: produce samples according to the underlying probability distribution. In our case, we repeatedly sample paths to terminals, where each choice point is sampled randomly from the distributions defined by p(a|s) and p(t|a,s), and return the average value over multiple samples. Clearly, this sampling apparatus will converge to the above expectation-based function as the number of samples gets large. This sampling approach has several advantages. First and most important, it focuses the search only on those portions of the lookahead tree that are likely to occur. This makes it computationally efficient. Second, it can be used to make real-time decisions where deliberation time can be traded against accuracy: the more samples, the more accurate the result in terms of its difference from the expectation-based function. Finally, it can be sped up by parallelism: multiple machines can compute different samples in parallel.
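A minimal sketch of this sampling apparatus, assuming toy stand-ins for the stochastic policy p(a|s), the transition function p(t|a,s), the goal predicate G(s), and the terminal utility u(s), might look as follows; each sample follows one path to a terminal, and the estimate is the average terminal utility:

```cpp
#include <iostream>
#include <random>

std::mt19937 rng{42};

// Toy stand-ins for the problem-specific pieces: G(s), u(s), p(a|s) and p(t|a,s).
using State = int;
bool isTerminal(State s) { return s >= 5; }                  // G(s)
double terminalUtility(State s) { return 10.0 - s; }         // u(s)

int sampleAction(State /*s*/) {                              // draw a ~ p(a|s)
    std::bernoulli_distribution coin(0.5);                   // toy uniform stochastic policy
    return coin(rng) ? 1 : 2;
}
State sampleNextState(State s, int a) {                      // draw t ~ p(t|a,s)
    std::bernoulli_distribution lucky(0.8);                  // toy transition function
    return s + (lucky(rng) ? a : 1);
}

// Monte Carlo estimate of the expectation-based objective f(s0): average, over
// many sampled paths to terminals, of the terminal utility.
double estimateF(State s0, int numSamples) {
    double total = 0.0;
    for (int i = 0; i < numSamples; ++i) {
        State s = s0;
        while (!isTerminal(s)) s = sampleNextState(s, sampleAction(s));
        total += terminalUtility(s);
    }
    return total / numSamples;
}

int main() {
    std::cout << "f(0) ~= " << estimateF(0, 10000) << "\n";  // more samples, better estimate
}
```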
The major advantage of the expectation-based approach is that an agent can take into account how the other agents are likely to behave rather than how they optimally behave. For example, in computer chess, the usual assumption is that the opponent will play optimally against us. This assumption makes chess programs play conservatively because they assume a perfect opponent. It might be possible to improve performance if we played to the opponent's likely moves rather than optimal moves.
In general, the stochastic policy becomes an integral part of the simulation decision making model of the application. Each run of this simulation model generates a new branch of the lookahead tree.
Our approach has several advantages over dispatch rules. First, it is situation specific. It is computationally simpler to make a decision for a specific state and target production goal through lookahead than it is to learn a general dispatch rule for all states and all goals. This is because lookahead only elaborates that portion of the lookahead tree necessary to make an immediate decision. Producing a full schedule of future events is much more expensive. Second, it is not necessary to build a separate simulation model or to halt production in order to test the lookahead-based approach. The lookahead model itself functions as a simulator — a smart one that includes future decisions. Third, it is possible to learn the parameters of the model from factory-floor data, thus reducing the cost of deploying our system. Finally, our approach scales with parallelism: we can distribute the decision making to multiple agents, each representing a resource.
Additional features and advantages of the invention will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by the system particularly pointed out in the written description and claims hereof, as well as in the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and not restrictive of the invention, as claimed.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the invention and together with the description serve to explain the principles of the invention.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 shows a portion of the lookahead tree built for chess. Nodes in this tree represent states or situations of the chessboard; directed arcs represent moves that result in a new state. The arcs emanating from the root node represent the first player's move possibilities; the ones from the level below that, the second player's move possibilities; alternate layers represent the alternating moves between the two players. If moving the queen is guaranteed to lead to a winning outcome and the rest of the moves are not, then that is the move of choice.
FIG. 2 shows how branch-and-bound pruning works when no uncertainty is present. A node can be pruned as long as it can be proved not to affect the value of an ancestor. In this case, the lower-bound property of the heuristic is used to prune the node.
FIG. 3 shows that branch-and-bound pruning fails when uncertainty is present. The reason is that the pruning bound grows exponentially with each level of uncertainty.
FIG. 4 shows how the expected value is computed at the root.
FIG. 5 shows our system architecture for decision-making on the factory floor. The factory model, which consists of a model of the effects of each action and their likelihood, is used by a set of decision-making agents to make a decision. This decision takes effect on the factory floor and the results of this decision are analyzed by the learning system, which in turn modifies the parameters of the factory model. The figure also shows interfaces to the factory floor environment, which consists of database information about inventory, resource availability, calendars, suppliers, shift information and customer orders. Such functions are typically handled by vendors of MRP, ERP, Supply-Chain Management, and Order Management systems.
FIG. 6 shows a flexible manufacturing system.
FIG. 7 shows how each agent makes a decision for a particular state. Each agent generates a list of possible actions for itself. It then samples the possible actions of the other agents according to the stochastic policy p(a|s). For each of these samples, it computes the lookahead. Next, each agent chooses the action that is associated with the lowest average outcome (where the average is derived from the aforementioned sampling process). Finally, each agent applies the action. The resulting state becomes the new state and the cycle continues.
FIG. 8 describes the lookahead method. If the state is a terminal, then the terminal's value is returned. This value represents the utility for that state. For example, the utility might include the path cost plus a heuristic estimate. Or, it might be the path cost plus the final outcome value. If a terminal is not reached, then the set of actions is sampled according to p(a|s), across all decision-makers, and the resulting action set is applied by computing the next state. From this step, the loop continues to the first step (the terminal check).
FIG. 9 shows an example of a knowledge representation structure for the stochastic policy. The particular structure is that of a Bayesian Network. In principle, other structures such as neural nets or decision trees could be used to represent that policy.
FIG. 10 shows that the lookahead model can be used as a simulator that interleaves decision- making with simulation.
FIG. 11 shows the interfaces of Oasys, an application of this invention.
FIG. 12 shows the interleaving of execution and lookahead modes in Oasys.
BEST MODE FOR CARRYING OUT THE INVENTION
We will now detail an exemplary embodiment, called Simulation-Based Real-Time Decision-Making (SRDM), of the invention. One skilled in the art, given the description herein, will recognize the utility of the system of the present invention in a variety of contexts in which decision making problems exist. For example, it is conceivable that the system of the present invention may be adapted to decision making domains existent in organizations engaged in activities such as telecommunications, power generation, traffic management, medical resource management, transportation dispatching, emergency services dispatching, inventory management, logistics, and others. However, for ease of description, as well as for purposes of illustration, the present invention primarily will be described in the context of a factory environment with manufacturing activities.
FIG. 5 shows our system architecture for real-time factory-floor decision-making. The Factory Model consists of information such as the structure of the factory, how each action affects the state of the factory, how often resources fail, and how often parts are defective. This information is used by the Decision-Making Agents for lookahead and decision-making. These agents make a decision, independently and in parallel, that takes effect on the Factory Floor. The Factory Floor responds with updates to the state, which are sensed through the sensors. This information is then fed through the Learning System, which, in turn, updates the Factory Model. This cycle continues indefinitely.
For a specific illustration, consider the routing problem in a simple reliable flexible (one-part with multiple routings) manufacturing system as shown in FIG. 6. This system consists of 5 machines (A to E) arranged in two layers and connected by various route segments. Identical parts arriving from the left side are completely processed when they depart at the right side, after following any of the following alternative routes: A-C, A-D, B-D, or B-E. Thus, there are three decision opportunities: 1. New part: choose either Machine A or B.
2. After Machine A: choose either Machine C or D.
3. After Machine B: choose either Machine D or E.
Thus, the decision alternatives are Left (A, C, and D, respectively, for the three decisions) or Right (B, D, and E, respectively). The route segments as well as the queues in front of each machine are FIFO (first-in-first-out). The operational objective (KPI) is to minimize the average lead time, that is, the average time a part spends in the system (from arrival to departure). In this illustration, the arrival and processing times are exponentially distributed; FIG. 6 also shows the corresponding means (in minutes). The travel time between each pair of nodes is fixed at 2 minutes.
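As a rough illustration of this KPI, the following sketch simulates a single FIFO machine (a toy stand-in for the full five-machine system) with exponentially distributed inter-arrival and processing times and estimates the average lead time via the Lindley recursion; the means used here are made up:

```cpp
#include <algorithm>
#include <iostream>
#include <random>

int main() {
    // Toy stand-in for the lead-time KPI: one FIFO machine with exponentially
    // distributed inter-arrival and processing times (the means are made up).
    // Lead time = waiting time + processing time for each part.
    std::mt19937 rng{2004};
    std::exponential_distribution<double> interArrival(1.0 / 10.0);  // mean 10 minutes
    std::exponential_distribution<double> processing(1.0 / 8.0);     // mean 8 minutes

    const int parts = 100000;
    double wait = 0.0, prevService = 0.0, totalLead = 0.0;
    for (int i = 0; i < parts; ++i) {
        double a = interArrival(rng);
        // Lindley recursion: this part's wait follows from the previous part's
        // wait and service time, less the inter-arrival gap (never negative).
        wait = std::max(0.0, wait + prevService - a);
        double s = processing(rng);
        totalLead += wait + s;
        prevService = s;
    }
    std::cout << "average lead time ~= " << totalLead / parts << " minutes\n";
}
```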
In SRDM, the top-level decision maker uses the steps given in FIG. 7 to make and apply decisions. In a state (or situation) s, it first generates a list A of possible actions (or decisions or alternatives). For each action in A, it samples the actions of the other decision makers based on the stochastic policy p(a|s) (our invention covers the special case of a uniform stochastic function, where all p(a|s) have identical values for any given state). For each such sample, it computes the lookahead outcome using the steps given in FIG. 8. It then chooses and applies the action with the lowest associated outcome.
To compute the lookahead outcome (see the steps in FIG. 8), the decision maker keeps applying the simulated actions of all decision makers (including itself) and sampling new actions until the lookahead depth (terminal) is reached. It repeats this several times and returns, as the outcome, the average of the utility of all the terminals. The utility is the sum of the utility observed until reaching the terminal and the heuristic value of the terminal.
SRDM relies on a discrete event simulation model of the underlying application. Though the simulation uses a fixed policy (deterministic or stochastic, manual or optimized), SRDM does not use that policy to make a decision in the current situation. Instead it runs several simulations (called look-aheads) for a small number of alternative decisions and then selects the decision that optimizes the Key Performance Indicators (KPIs). In short, the look-ahead simulations overcome the myopia and rigidity of the underlying fixed policy by taking into account the longer-term impact of each decision in the current situation. Each look-ahead simulation is used to compute the KPIs by combining the KPIs observed during the look-ahead and the KPIs estimated from the terminal situation in that look-ahead. SRDM is defined by four key parameters:
1. Policy: Which fixed policy to use during the look-ahead simulations?
2. Depth: How long to run each look-ahead simulation?
3. Width: How many look-ahead simulations to run for each decision alternative?
4. Heuristics: Which heuristics to use to estimate the KPIs at the end of each look-ahead simulation? Heuristics are necessary to estimate the KPIs for the work in progress.
For each decision opportunity, SRDM uses the simulation model to generate the required number of depth-restricted look-ahead simulations for each alternative. The KPIs from these look-aheads are averaged and the decision with the best aggregated KPI is chosen.
The real-time constraint is met as follows: SRDM starts with depth 0, where the fixed policy completely determines the decision. SRDM keeps incrementing the depth until the available time runs out or the depth limit is reached. Finally, it chooses the decision based on the last depth for which all the look-aheads were successfully completed. A more sophisticated version of SRDM interleaves both the depth and width increments to provide decisions with a desired statistical confidence level.
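A sketch of this anytime scheme, with a hypothetical lookAheadKPI helper standing in for one batch of depth-restricted look-ahead simulations, might be:

```cpp
#include <chrono>
#include <iostream>
#include <limits>
#include <vector>

// Hypothetical stand-in for one batch of depth-restricted look-ahead simulations:
// in a real system this would run 'width' simulations of the underlying model to
// the given depth and return the aggregated KPI (lower is better here).
double lookAheadKPI(int alternative, int depth) {
    return 10.0 / (1 + alternative) + depth * 0.01;          // toy placeholder
}

// Anytime SRDM loop: depth 0 means the fixed policy decides; keep deepening while
// time and the depth limit allow, and keep the choice from the last depth whose
// look-aheads all completed within the budget.
int srdmDecide(const std::vector<int>& alternatives, int maxDepth, double budgetMs) {
    auto start = std::chrono::steady_clock::now();
    auto elapsedMs = [&] {
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start).count();
    };
    int chosen = alternatives.front();                       // stand-in for the fixed policy
    for (int depth = 1; depth <= maxDepth && elapsedMs() < budgetMs; ++depth) {
        int bestAlt = chosen;
        double bestKPI = std::numeric_limits<double>::infinity();
        bool completed = true;
        for (int alt : alternatives) {
            if (elapsedMs() >= budgetMs) { completed = false; break; }
            double kpi = lookAheadKPI(alt, depth);
            if (kpi < bestKPI) { bestKPI = kpi; bestAlt = alt; }
        }
        if (completed) chosen = bestAlt;                     // commit only fully finished depths
    }
    return chosen;
}

int main() {
    std::cout << "chosen alternative: " << srdmDecide({0, 1}, 8, 50.0) << "\n";
}
```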
Typical examples of fixed policies for this case are:
• Deterministic Policy: Choose the machine with the shortest queue (break ties by choosing the machine on the Right).
• Stochastic Policy: The probability of choosing a machine is inversely proportional to its queue length.
• Deterministic Local Linear Policy: At D1, choose Left if the expression xQ(A) + yQ(B) + z is greater than 50, where Q(M) is the length of the queue for machine M and x, y, z are optimized using an offline procedure like OptQuest, a commercial simulation optimization package.
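A sketch of the first two policies above for one such decision, taking the two queue lengths as inputs (the +1 offset in the stochastic weights is an added guard against zero-length queues), might be:

```cpp
#include <iostream>
#include <random>

std::mt19937 rng{7};

// Deterministic policy: choose the machine with the shorter queue, ties to the Right.
char deterministicChoice(int qLeft, int qRight) {
    return (qLeft < qRight) ? 'L' : 'R';
}

// Stochastic policy: the probability of choosing a machine is inversely
// proportional to its queue length (+1 added so empty queues stay finite).
char stochasticChoice(int qLeft, int qRight) {
    double wLeft = 1.0 / (qLeft + 1), wRight = 1.0 / (qRight + 1);
    std::bernoulli_distribution pickLeft(wLeft / (wLeft + wRight));
    return pickLeft(rng) ? 'L' : 'R';
}

int main() {
    std::cout << deterministicChoice(3, 5) << " " << stochasticChoice(3, 5) << "\n";
}
```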
We now describe SRDM's method to learn the stochastic policy from actual decision-making experience using the Maximum Likelihood (ML) principle as the learning framework. SRDM learns the parameters associated with the function p(a|s). For example, we may want to learn the probability of a resource committing to a particular task, given the state information in each buffer and what other tasks are currently executing. According to ML, we choose those parameters θ that maximize the likelihood of the data. In our case, the data is a set of past situations (states) and the actions of each resource for those states. When the data is iid (independently and identically distributed), the ML task simplifies to choosing θ such that
∏_{j=1}^{r} p(a_j | s_j, θ) is maximized (j is the data item number; r is the number of data items).
In particular, the function p(a|s) is represented as a Bayesian Network (BN) and the ML task simplifies to updating the parameters of the BN's conditional probability tables. In essence, the updates to the parameters are updates to frequencies of certain events in the data.
For example, the BN whose schema is shown in FIG. 9 could represent the stochastic policy. Here, the a_i's represent individual resources and the s_i's represent attributes of the state (e.g. the number of objects in each buffer and what tasks are currently running). The assumption here is that all resources are probabilistically independent:
p(a_1, a_2, ..., a_n | s_1, s_2, ..., s_m) = ∏_{i=1}^{n} p(a_i | s_1, s_2, ..., s_m)
According to the structure defined in the BN above:
p(a | s_1, s_2, ..., s_m) = p(s_1, s_2, ..., s_m | a) p(a) / Σ_{a'∈A} p(s_1, s_2, ..., s_m | a') p(a')
Also, according to the BN above, all of the state attributes are independent:
p(s_1, s_2, ..., s_m | a) = ∏_{i=1}^{m} p(s_i | a)
Thus, the ML task simplifies to recording the probability of each state attribute value given each resource action (i.e. which task each resource commits to); the data comes from actual decisions. For discrete-valued state attributes, this amounts to storing the frequency of the attribute value given the resource action. For continuous-valued state attributes, we use a normal distribution for which we compute the sample mean and variance, given each resource action. For continuous-valued conditional attributes (e.g. one continuous-valued state attribute conditional on another), we use a conditional multivariate normal distribution. In this distribution, a child node (indexed by 1) is normally distributed with mean μ_1 + V_12 V_22^{-1} (x_2 - μ_2) and variance V_11 - V_12 V_22^{-1} V_21, where μ and V are partitioned conformably as (all the parents are indexed by 2):
μ = (μ_1, μ_2) and V = [V_11, V_12; V_21, V_22]
The sample mean vector and covariance matrix are easily computed from the data.
Of course, the stochastic policy need not be represented by a Bayesian Network. Other 0 methods such as neural networks, polynomial functions, or decision trees could be used.
Whatever method is used, the learning approach has several advantages. First, it reduces the cost of model building, as the same approach can be used to learn the transition function p(t|a,s) from data. Second, it reduces the cost of model validation, since the ML principle is probabilistically sound and thus self-validating. Third, the lookahead model itself can act as a "smart" simulator, one that takes into account decisions by each agent, obviating the development of a separate simulation model for testing. Decision-making can be rigorously evaluated without disrupting the actual application. Fourth, the independence assumption makes sampling easy to compute: we need only compute p(s|a) for the current state and for each possible resource action; from this we can compute p(a|s) for each possible action and sample from a according to p(a|s). The steps to compute p(a|s) are defined in the above equations.
Finally, all learning can be done prior to deployment since the lookahead-simulator can generate its own training data. As FIG. 10 shows, the decision-maker (a resource) generates a decision and the model simulates that decision and all the other decisions of other decision-makers up to the next decision-point. The effect of those decisions is then input into the learning system, which in turn generates new parameters for the decision-maker.
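For discrete-valued state attributes, these updates amount to counting; a sketch with hypothetical integer encodings for attribute values and actions, and with Laplace smoothing added for unseen combinations, might be:

```cpp
#include <iostream>
#include <map>
#include <tuple>
#include <vector>

// One logged decision: discrete state-attribute values and the action (task
// commitment) the resource actually took. Integer encodings are hypothetical.
struct Example { std::vector<int> attrs; int action; };

struct PolicyLearner {
    std::map<int, double> actionCount;                         // counts for p(a)
    std::map<std::tuple<int, int, int>, double> attrCount;     // (attribute index, value, action)
    double total = 0;

    // The ML update is just counting frequencies of events in the data.
    void observe(const Example& e) {
        ++total; ++actionCount[e.action];
        for (size_t i = 0; i < e.attrs.size(); ++i)
            ++attrCount[{(int)i, e.attrs[i], e.action}];
    }

    // p(a|s) proportional to p(a) * product_i p(s_i|a), normalized over actions.
    // Laplace smoothing is added here so unseen combinations keep nonzero mass.
    std::map<int, double> policy(const std::vector<int>& attrs) const {
        std::map<int, double> p;
        double z = 0;
        for (const auto& [a, na] : actionCount) {
            double score = (na + 1) / (total + actionCount.size());
            for (size_t i = 0; i < attrs.size(); ++i) {
                auto it = attrCount.find({(int)i, attrs[i], a});
                double nia = (it == attrCount.end()) ? 0.0 : it->second;
                score *= (nia + 1) / (na + 2);                 // smoothed p(s_i|a)
            }
            p[a] = score; z += score;
        }
        for (auto& kv : p) kv.second /= z;
        return p;
    }
};

int main() {
    PolicyLearner learner;
    learner.observe({{1, 0}, 0});                              // resource committed to task 0
    learner.observe({{1, 0}, 0});
    learner.observe({{0, 1}, 1});                              // resource committed to task 1
    for (const auto& [a, pa] : learner.policy({1, 0}))
        std::cout << "p(a=" << a << "|s) = " << pa << "\n";
}
```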
As our decision-making engine is domain-independent and highly modular, it has the potential to be applied to other complex decision-making tasks such as network routing, medical decision-making, transportation routing, political decision-making, investment planning and military planning. It can also be applied to multiple agents — where parallel action sets a are assumed to be input. Finally, it can be applied in a real-time setting: rather than searching to terminals, it is possible to search to a fixed depth and return a heuristic estimate of the remaining utility instead. The depth begins at 0 and as long as there is decision-making time, the depth is incremented by 1. The action of choice is the best (i.e. lowest utility) action associated with the last completed depth. Integration with a simulation system
In another embodiment, this invention enhances an existing simulation system. Although we illustrate this by enhancing a specific simulation engine, SLX, a similar approach will work for other simulation systems. SLX is general-purpose software for developing discrete event simulations. SLX provides an essential set of simulation mechanisms like event scheduling, generalized wait-until, random variable generation, and statistics collection. SLX also provides basic building blocks, such as queues and facilities, for modeling a wide variety of systems at varying levels of detail. SLX provides no built-in functionality for real-time synchronization or decision-making. Our enhancement, Oasys for SLX, enhances SLX in the following ways:
• The user defines specific performance measures for optimization. Performance measures include operational measures like throughput and cycle time as well as financial measures like profit and market share.
• The user does not have to assign a fixed policy for each decision point. Instead, Oasys automatically chooses the decision such that the performance measure is optimized. Oasys relaxes the optimization requirement by considering other constraints such as time. For example, give me the best answer possible within 20 seconds.
• Oasys enhances process specifications by allowing non-deterministic actions - these contentions are also resolved during simulation such that the performance measure is optimized.
• Oasys continually learns so as to keep improving the optimization over time.
• Oasys communicates and synchronizes with external real-time monitoring and control systems.
• Oasys communicates and synchronizes with users in real-time.
As shown in FIG. 11, Oasys interacts with the following external systems:
• Controllers: Systems for controlling the execution systems. These systems actually effect the chosen decision in the real world.
• Monitors: Systems for providing feedback from the execution systems. These systems sense the actual change in the real world. Sensors are needed because an external event may have taken place in the world or the actions might not have had their intended effect.
In addition, it provides a front-end to the users for providing real-time instructions. Oasys consists of three simulation models:
• User model: for simulating users' actions
• Execution model: for simulating the actions of execution systems
• Control model: for simulating the real-time decisions
The control model interacts with both the user and the execution model.
At any decision point in the control model, Oasys uses SLX to perform a look-ahead with a finite number of alternatives. Each look-ahead involves running one or more simulations for some period of time, determining the performance values at the end of each of those simulations, and combining those values to obtain one set of performance values. Oasys then chooses the best alternative that may get communicated to the user. The user may choose to accept or override this recommendation - the final decision may then get communicated to the external control systems.
Thus, Oasys alternates between the following two modes, as shown in FIG. 12: • Execution mode: Oasys interacts with external execution systems and users.
• Look-ahead mode: Instead of interacting with external execution systems and users, the control model interacts with execution and user models, respectively.
At each decision point, after the look-ahead, Oasys presents its recommendation to the user along with the expected performance values of all the alternatives. The user must take one of the following actions:
• Select the recommendation: Oasys complies with that decision and continues until the next set of recommendations.
• Select an alternative: Oasys complies with that decision and continues until the next set of recommendations.
• Forgo selection: Oasys will wait for a specified amount of time, and then select its recommended decision by default.
Each decision selected by the user is immediately communicated to the execution system. Each observation made by the execution system is immediately communicated to Oasys, which relays it to SLX.
The decision points are defined by portions of the simulation where multiple outcomes are under our control. The execution-relevant state is explicitly saved before lookahead and restored whenever needed.
The nodes of the look-ahead tree represent portions of the simulation that are deterministic. The children of a node represent the result of alternative events. Each node is processed (possibly several times) in one of the following ways:
• Terminal node: The performance forecast for the node and the performance heuristic are combined to produce the performance forecast for the decision.
• A new child is generated: If this is the first node, a new child is generated according to the next decision we wish to try, otherwise a child is generated according to a probability distribution and processing passes to the child. Thus, the following could be specified by the designer:
• What are the terminal nodes?
• How are children generated? E.g. probabilistic sampling.
• How are performance measures combined? E.g. average, weighted by probabilities. This requires more details on the consequences of the event of going from parent to child.
The following are learned, indexed by the relevant parts of the state:
• User model: To anticipate the effects of users' actions.
• Execution system model: To anticipate the effects of observations made by the execution system.
• Performance forecast: To predict the performance measure when a new node is encountered (before any look-ahead for that node).
Other Exemplary Embodiments
Instead of choosing the action with the lowest average outcome given any state s, an alternative embodiment of our invention uses the following approach, where MIN-D, INC-D, and MAX-D are depth parameters (positive numbers), MIN-C and INC-C are confidence parameters (numbers between 0 and 100), and MIN-N and INC-N (positive numbers) are iteration limits:
1. Set the lookahead depth D to MIN-D, the confidence level C to MIN-C, and the number of samples N to MIN-N
2. Set a to be the result of LookAhead(s, D, C, N)
3. Present a to the decision maker and get the next command
4. If next command is "increase confidence", increment C by INC-C and go to step 2
If next command is "increase depth", increment D by INC-D and go to step 2
If next command is "increase iterations", increment N by INC-N and go to step 2
If next command is "commit action", stop
The function LookAhead(s, D, C, N) is calculated by the following steps. Let U(a) denote the samples U1(a), U2(a), ..., Un(a) for each action a, with sample mean S(a) and standard deviation D(a), and let M(a) be the actual mean being estimated by the sample mean S(a):
1. Set n = 1
2. For each alternative action a in state s, perform lookahead to get utility Un(a)
3. Partition the actions into two categories, a non-empty Indifference set I and a Reject set R, such that
a. For any two actions a in I and b in R, the confidence that M(a) > M(b) is more than C.
b. For any two actions a and b in I, the confidence level that M(a) > M(b) is at most C.
4. If R is empty and n < N, increment n by 1 and go to step 2
5. Return any action from the set I
If the number of possible actions in a state is large or infinite, only a few of the most probable actions are considered, by sampling them using the stochastic policy p(a|s) at state s. The cutoff could be specified in several ways, for example, by setting a limit on the number of distinct actions.
There is an alternative way to define the set I, by using another confidence parameter B, no larger than C. This parameter B can also be varied based on the next command, just like the parameter C.
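A sketch of LookAhead(s, D, C, N), with a hypothetical sampleUtility function standing in for one depth-restricted look-ahead and a normal approximation used for the pairwise confidence that M(a) > M(b), might be (the larger sample mean is kept, following the inequality as written above):

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <map>
#include <random>
#include <vector>

std::mt19937 rng{123};

// Hypothetical stand-in for one depth-restricted look-ahead: returns a noisy
// utility sample Un(a) for alternative a (a real system would run the simulator).
double sampleUtility(int a, int /*depth*/) {
    std::normal_distribution<double> noise(a == 0 ? 5.0 : 4.0, 1.0);
    return noise(rng);
}

// Standard normal CDF, used as a rough approximation of the confidence that
// M(a) > M(b) given the two sample means and standard deviations.
double phi(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

// LookAhead(s, D, C, N) sketch: sample every alternative each round, partition
// into an Indifference set I and a Reject set R at confidence level C, and keep
// sampling (up to N rounds) while R stays empty.
int lookAhead(const std::vector<int>& alts, int depth, double C, int N) {
    std::map<int, std::vector<double>> samples;
    for (int n = 1; n <= N; ++n) {
        for (int a : alts) samples[a].push_back(sampleUtility(a, depth));
        if (n < 2) continue;                     // need two samples to estimate D(a)
        std::map<int, double> mean, sd;          // sample mean S(a) and std dev D(a)
        for (int a : alts) {
            double s = 0, s2 = 0;
            for (double u : samples[a]) { s += u; s2 += u * u; }
            mean[a] = s / n;
            sd[a] = std::sqrt(std::max(1e-9, s2 / n - mean[a] * mean[a]));
        }
        // b goes to the Reject set if some a beats it with confidence above C.
        std::vector<int> I;
        bool anyRejected = false;
        for (int b : alts) {
            bool rejected = false;
            for (int a : alts) {
                if (a == b) continue;
                double z = (mean[a] - mean[b]) /
                           std::sqrt(sd[a] * sd[a] / n + sd[b] * sd[b] / n);
                if (phi(z) > C) { rejected = true; break; }
            }
            if (rejected) anyRejected = true; else I.push_back(b);
        }
        if (anyRejected || n == N)
            return I.empty() ? alts.front() : I.front();   // return any action from I
    }
    return alts.front();
}

int main() {
    std::cout << "chosen: " << lookAhead({0, 1}, 3, 0.95, 200) << "\n";
}
```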
Another approach is to use a stochastic policy to generate the initial probabilities for various alternatives of a decision, and then use look-ahead to refine these probabilities, until some termination criterion is satisfied.
In general, the look-ahead strategy can be specified using the following:
1. Alternative generation, prioritization, and elimination
2. Sampling sequence
3. Sample generation
4. Sample termination
5. Terminal heuristics computation
6. Sample KPI computation
7. KPI aggregation
8. Look-ahead termination
9. Alternative selection
Instead of using a standard execution model simulator for the lookahead, a faster abstract model may be used. This model, implemented in a general purpose programming language like C++, may be learned using observations made on the real execution system or its simulator. If there are multiple concurrent decisions to be made, one could construct a dependency graph among them, based on whether a decision impacts another. Except for the cycles in this graph, the rest of the decisions may then be serialized; a sketch of this serialization appears after the list below. For multiple inter-dependent decisions, several approaches may be used:
1. Treat them as one complex decision (alternatives multiply)
2. Approximate them by a sequence of decisions (alternatives add up)
3. Intermediate approaches (may be based on standard optimization approaches like local search, beam search, evolutionary algorithms, etc.)
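A sketch of the dependency-graph serialization mentioned above, grouping each cycle (strongly connected component) into one complex decision and ordering the groups so that impacting decisions come first (the decision names are made up), might be:

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Build a dependency graph (edge u -> v means decision u impacts decision v),
// group each cycle (strongly connected component) into one "complex decision",
// and order the groups so that impacting decisions are made first.
struct DecisionGraph {
    std::vector<std::string> names;
    std::vector<std::vector<int>> adj;

    int add(const std::string& name) {
        names.push_back(name);
        adj.emplace_back();
        return (int)names.size() - 1;
    }
    void impacts(int u, int v) { adj[u].push_back(v); }      // decision u impacts decision v

    // Tarjan's strongly connected components; SCCs come out in reverse
    // topological order of the condensation, so the list is reversed at the end.
    std::vector<std::vector<int>> serialize() const {
        int n = (int)names.size(), counter = 0;
        std::vector<int> index(n, -1), low(n, 0), stack;
        std::vector<bool> onStack(n, false);
        std::vector<std::vector<int>> groups;
        std::function<void(int)> dfs = [&](int v) {
            index[v] = low[v] = counter++;
            stack.push_back(v); onStack[v] = true;
            for (int w : adj[v]) {
                if (index[w] == -1) { dfs(w); low[v] = std::min(low[v], low[w]); }
                else if (onStack[w]) low[v] = std::min(low[v], index[w]);
            }
            if (low[v] == index[v]) {                        // v is the root of an SCC
                std::vector<int> group;
                for (int w = -1; w != v; ) {
                    w = stack.back(); stack.pop_back(); onStack[w] = false;
                    group.push_back(w);
                }
                groups.push_back(group);
            }
        };
        for (int v = 0; v < n; ++v) if (index[v] == -1) dfs(v);
        std::reverse(groups.begin(), groups.end());          // impacting groups first
        return groups;
    }
};

int main() {
    DecisionGraph g;
    int d1 = g.add("D1"), d2 = g.add("D2"), d3 = g.add("D3");
    g.impacts(d1, d2);                                       // D1 impacts D2
    g.impacts(d2, d3); g.impacts(d3, d2);                    // D2 and D3 form a cycle
    for (const auto& group : g.serialize()) {
        std::cout << (group.size() > 1 ? "[complex] " : "");
        for (int v : group) std::cout << g.names[v] << " ";
        std::cout << "\n";
    }
}
```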
More Complex Belief Networks
Instead of using a simple belief network, alternative embodiments of our invention use more complex belief networks, including
1. Hierarchical variables in belief networks
2. Belief networks with abstract data types:
3. Belief networks with user-defined parameterized (re-usable) functions .
4. Higher-order Belief networks models with differentials 5. POMDPs (Partially Observable MDP) where the belief- vector distribution is represented by a Belief network itself, which is updated through the application of decisions and observations.
We detail these alternative embodiments below.
1. Belief Networks with Hierarchical Variables
An important method of dealing with complexity is to place things in a hierarchy. For example, the animal kingdom's taxonomy makes it easy for scientists to understand where each organism is located in that kingdom. Zip codes, which are hierarchically coded by region, make it easier for the post office to distribute mail. The usual way of dealing with such hierarchies in a Belief Network is to use an "excess" encoding that assigns nonsensical combinations a zero probability. For example, if the objects in a particular universe are Male or Female, and Females are additionally Pregnant or Non-Pregnant, then the usual way of encoding such a hierarchy is to use one node in the Belief Network for Sex and another node for Pregnant (a Boolean). This method requires the user to specify a zero probability for the combination Male and Pregnant.
In our embodiment, there is only one node, representing Male or Female and, if Female, Pregnant or Non-Pregnant. The variable at that node takes three values: Male, Female/Pregnant, and Female/Non-Pregnant. Male and Female have their prior probabilities, and Pregnant is conditional on Female. This reduces the memory requirements and makes it simpler to learn such information from data using standard learning techniques.
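A small sketch of the two encodings in Python, to make the difference concrete; the probability numbers are illustrative assumptions:

```python
# "Excess" encoding: two nodes, with the Male/Pregnant combination forced to zero.
p_sex = {"Male": 0.5, "Female": 0.5}
p_pregnant_given_sex = {
    ("Male", True): 0.0,    # nonsensical combination must be spelled out
    ("Male", False): 1.0,
    ("Female", True): 0.1,
    ("Female", False): 0.9,
}

# Hierarchical encoding: one node with three values; the nonsensical
# combination simply does not exist, and Pregnant is conditional on Female.
p_pregnant_given_female = 0.1
p_hierarchical = {
    "Male": p_sex["Male"],
    "Female/Pregnant": p_sex["Female"] * p_pregnant_given_female,
    "Female/Non-Pregnant": p_sex["Female"] * (1 - p_pregnant_given_female),
}
assert abs(sum(p_hierarchical.values()) - 1.0) < 1e-9
```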
This method of hierarchical variables can also be extended to clustering, where the hierarchies are given but the parameters are not known. Standard learning methods such as EM can be used to fit the hierarchies to data.
Finally, these hierarchies can be generalized to partial orders, where objects may belong to more than one parent class. The extension is relatively straightforward: a hierarchical variable can now be conditional on two or more parents, just as in standard Belief Networks.
2. Belief Networks with Abstract Data Types
Abstract data types in our embodiment include but are not limited to:
• Stacks
• Queues
• Priority Queues
• Records
• Matrices
• Association Lists
• Count Lists
• Existence Tables
• Sets
• Bags
Programming languages have long used such data types to make it simpler for programmers to develop their applications without having to specify the functions that operate on those types or the details of how those functions operate. The motivation for their use in Belief Networks is similar: after specifying the type of each variable, the user can specify how such a variable changes with each decision, using the built-in functions and tests that operate on those types. For example, in manufacturing, the user often requires queues to describe a factory process, and the built-in type Queue makes it simple for the user to specify a queue.
3. Higher-Order Belief Networks with Differentials
Sometimes one would like to specify the effects of a decision or a policy in terms of information from previous states. The stochastic policy can be generalized to include information from previous states. For example, p(a|s,t) captures the idea that the probability of an action depends on the state s and some other previous state t, which could be a vector of previous states. Similarly, the transition function can be generalized to include information from previous states: p(w|a,s,t). More generally, the policy and transition might depend on a differential vector of previous states rather than the states themselves. For example, the acceleration (a first-order difference) might be required for decision-making in an application for an autonomous guided vehicle.
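A sketch, in Python, of a higher-order stochastic policy whose action probabilities depend on a first-order difference computed from the current and previous state; the vehicle example, the two-action policy, and the logistic mapping are illustrative assumptions:

```python
import math
import random

def higher_order_policy(state, prev_state, dt=1.0):
    """Return p(a | s, t) for a toy vehicle policy that depends on the
    first-order difference between the previous and current states.
    States are assumed to be dicts with a 'speed' key."""
    accel = (state["speed"] - prev_state["speed"]) / dt  # differential feature
    # The larger the acceleration, the more probable "brake" becomes.
    p_brake = 1.0 / (1.0 + math.exp(-accel))             # logistic squashing (assumed)
    return {"brake": p_brake, "coast": 1.0 - p_brake}

def sample_action(dist):
    """Sample one action from a {action: probability} distribution."""
    r, acc = random.random(), 0.0
    for action, p in dist.items():
        acc += p
        if r <= acc:
            return action
    return action   # fallback for floating-point round-off
```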
4. Belief Networks with User-Defined Functions
To declare a function that a user can reuse, the user must specify certain information, as in any modern programming language: parameters, local variables, and other functions within their scope. All of these can be referred to in the Belief Network for the definition of a function. Once defined, functions can be re-used just as any built-in function. This is an important way for the user to extend the set of built-in functions to suit a particular application and to facilitate re-use. The function definition involves the specification of a Belief Network, possibly with multiple nodes, but with a single output node representing the result of the function. The other nodes can be a combination of parameters, local variables, or variables global in scope to that function. Each node can reference other functions or recursively reference the current function.
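A toy sketch in Python of a reusable function defined as a small network with a single output node; the node-dictionary interface and the lead-time example are assumptions for illustration, not the invention's actual specification language:

```python
def make_function(nodes, output):
    """Define a reusable function as a small network: 'nodes' maps each node
    name to (list_of_parents, callable computing the node from its parents);
    'output' names the single result node. Parameters are nodes with no
    definition, bound at call time."""
    def call(**params):
        values = dict(params)
        def evaluate(name):
            if name in values:
                return values[name]
            parents, fn = nodes[name]
            values[name] = fn(*[evaluate(p) for p in parents])
            return values[name]
        return evaluate(output)
    return call

# Example: a reusable "expected lead time" function with one parameter.
expected_lead_time = make_function(
    nodes={
        "service_rate": (["queue_length"], lambda q: 1.0 / (1 + q)),
        "lead_time": (["queue_length", "service_rate"], lambda q, r: q / r),
    },
    output="lead_time",
)
print(expected_lead_time(queue_length=4))   # prints 20.0
```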
5. Belief Networks with POMDPs as Embodied by Belief Network Representations of the Distribution
In a POMDP, the state is represented by a belief vector that captures the probability distribution over possible states. Our additional embodiment is to represent this belief vector as a Belief Network itself. This network can be made arbitrarily complex, depending on the user's specification. For example, the x, y position of a vehicle might be a multi-variate normal distribution or it might be a univariate normal distribution. Events (actions or observations) cause this Belief Network to be updated according to the underlying transition function. However, unlike in standard MDPs or standard POMDPs, observations cause a change in state just as actions do. The way this change in state takes place can include, but is not limited to: user-specified operations on the Belief Network and multiple sampling of the belief network according to the transition function. In the case of sampling, the weights of the network are adjusted for each prior combination of variables, for each child node and its parent nodes. This compact representation of a belief vector allows the solution of POMDPs of greater complexity than before. Moreover, it generalizes approaches such as Kalman filtering.
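A sketch of the sampling-based belief update in Python, in the style of a particle filter: the belief is approximated by weighted samples, an action pushes each sample through the transition function, and an observation reweights the samples. The transition_sample and observation_likelihood functions are assumed interfaces, not part of the specification:

```python
import random

def update_belief(particles, weights, action, observation,
                  transition_sample, observation_likelihood):
    """particles: list of sampled states; weights: matching probabilities.
    Returns a new (particles, weights) pair after the action and observation."""
    # Action step: sample each particle forward through the transition model.
    moved = [transition_sample(s, action) for s in particles]
    # Observation step: reweight by the likelihood of the observation.
    new_weights = [w * observation_likelihood(observation, s)
                   for s, w in zip(moved, weights)]
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]
    # Optional resampling keeps the particle set from degenerating.
    resampled = random.choices(moved, weights=new_weights, k=len(moved))
    return resampled, [1.0 / len(resampled)] * len(resampled)
```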
Having described the exemplary embodiments of the invention, additional advantages and modifications will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. Therefore, the specification and examples should be considered exemplary only, with the true scope and spirit of the invention being indicated by the enclosed claims.

Claims
1. A method for optimizing an active decision making process, comprising: a. creating a simulation model for the active decision making process; b. generating a plurality of alternative decisions at a choice point in the active decision making process; c. for one of these alternative decisions, generating a simulation of the future decision making process using the simulation model; and d. analyzing the result of this simulation to select a decision for the choice point.
2. The method of claim 1, wherein the simulation model comprises a stochastic component.
3. The method of claim 2, wherein the stochastic component comprises a policy for choosing among alternative decisions.
4. The method of claim 1, wherein two simulations for an alternative decision are analyzed.
5. The method of claim 1, wherein the simulation model comprises a Bayesian network.
6. The method of claim 5, wherein the Bayesian network comprises hierarchical variables, abstract data types, differentials, user-defined functions, or POMDPs.
7. The method of claim 1, further comprising integrating the active decision making process with an external application.
8. The method of claim 7, wherein the external application comprises a simulation system.
9. The method of claim 7, wherein the simulation model is updated using the data obtained by monitoring the external application.
10. The method of claim 1, wherein the simulation model is updated using the result of the simulation.
11. A computer implemented system for optimizing an active decision making process, comprising: a. a simulation model for the active decision making process; b. generation of a plurality of alternative decisions at a choice point in the active decision making process; c. for one of these alternative decisions, generation of a simulation of the future decision making process using the simulation model; and d. analysis of the result of this simulation to select a decision for the choice point.
12. The system of claim 11, wherein the simulation model comprises a stochastic component.
13. The system of claim 12, wherein the stochastic component comprises a policy for choosing among alternative decisions.
14. The system of claim 13, wherein two simulations for an alternative decision are analyzed.
15. The system of claim 11, wherein the simulation model comprises a Bayesian network.
16. The system of claim 15, wherein the Bayesian network comprises hierarchical variables, abstract data types, differentials, user-defined functions, or POMDPs.
17. The system of claim 11, further comprising integrating the active decision making process with an external application.
18. The system of claim 17, wherein the external application comprises a simulation system.
19. The system of claim 17, wherein the simulation model is updated using the data obtained by monitoring the external application.
20. The system of claim 11, wherein the simulation model is updated using the result of the simulation.
PCT/IB2004/050401 2003-04-10 2004-04-05 Optimizing active decision making using simulated decision making WO2004090659A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/552,645 US20060200333A1 (en) 2003-04-10 2004-04-05 Optimizing active decision making using simulated decision making

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US46221003P 2003-04-10 2003-04-10
US60/462,210 2003-04-10

Publications (2)

Publication Number Publication Date
WO2004090659A2 true WO2004090659A2 (en) 2004-10-21
WO2004090659A3 WO2004090659A3 (en) 2005-07-21

Family

ID=33159846

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/050401 WO2004090659A2 (en) 2003-04-10 2004-04-05 Optimizing active decision making using simulated decision making

Country Status (2)

Country Link
US (1) US20060200333A1 (en)
WO (1) WO2004090659A2 (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7788119B2 (en) * 2003-05-09 2010-08-31 I2 Technologies Us, Inc. System providing for inventory optimization in association with a centrally managed master repository for core reference data associated with an enterprise
US20050251556A1 (en) * 2004-05-07 2005-11-10 International Business Machines Corporation Continuous feedback-controlled deployment of message transforms in a distributed messaging system
US20050251811A1 (en) * 2004-05-07 2005-11-10 International Business Machines Corporation Distributed messaging system supporting stateful
US7886180B2 (en) * 2004-05-14 2011-02-08 International Business Machines Corporation Recovery in a distributed stateful publish-subscribe system
US20070297327A1 (en) * 2006-06-27 2007-12-27 International Business Machines Corporation Method for applying stochastic control optimization for messaging systems
US8321258B2 (en) * 2006-08-31 2012-11-27 Sap Aktiengesellschaft Vehicle scheduling and routing with compartments
US8255524B2 (en) * 2007-03-08 2012-08-28 Telefonaktiebolaget L M Ericsson (Publ) Arrangement and a method relating to performance monitoring
US8438003B2 (en) * 2007-04-12 2013-05-07 Cadence Design Systems, Inc. Methods for improved simulation of integrated circuit designs
US9058449B2 (en) * 2007-05-30 2015-06-16 Credit Suisse Securities (Usa) Llc Simulating machine and method for determining sensitivity of a system output to changes in underlying system parameters
US7899763B2 (en) * 2007-06-13 2011-03-01 International Business Machines Corporation System, method and computer program product for evaluating a storage policy based on simulation
US8504621B2 (en) * 2007-10-26 2013-08-06 Microsoft Corporation Facilitating a decision-making process
US8255348B2 (en) * 2008-03-07 2012-08-28 Honeywell International Inc. Apparatus and method for order generation and management to facilitate solutions of decision-making problems
US8655822B2 (en) * 2008-03-12 2014-02-18 Aptima, Inc. Probabilistic decision making system and methods of use
US9170839B2 (en) * 2008-05-22 2015-10-27 Nokia Technologies Oy Method for job scheduling with prediction of upcoming job combinations
WO2010045272A1 (en) * 2008-10-14 2010-04-22 Honda Motor Co., Ltd. Smoothed sarsa: reinforcement learning for robot delivery tasks
US8234295B2 (en) * 2009-06-03 2012-07-31 International Business Machines Corporation Managing uncertain data using Monte Carlo techniques
US8634982B2 (en) * 2009-08-19 2014-01-21 Raytheon Company System and method for resource allocation and management
US20110071971A1 (en) * 2009-09-22 2011-03-24 Microsoft Corporation Multi-level event computing model
US9195961B1 (en) 2009-11-30 2015-11-24 Amdocs Software Systems Limited System, method, and computer program for generating channel specific heuristics
US20110282801A1 (en) * 2010-05-14 2011-11-17 International Business Machines Corporation Risk-sensitive investment strategies under partially observable market conditions
US8396730B2 (en) * 2011-02-14 2013-03-12 Raytheon Company System and method for resource allocation and management
US8799201B2 (en) 2011-07-25 2014-08-05 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for tracking objects
US20140188449A1 (en) * 2011-08-01 2014-07-03 Reinhold Achatz City Lifecycle Management
EP2788927A4 (en) * 2011-12-09 2015-08-12 Exxonmobil Upstream Res Co Method of stimulating shipping of liquefied natural gas
US8874615B2 (en) * 2012-01-13 2014-10-28 Quova, Inc. Method and apparatus for implementing a learning model for facilitating answering a query on a database
US10552764B1 (en) 2012-04-27 2020-02-04 Aptima, Inc. Machine learning system for a training model of an adaptive trainer
US10290221B2 (en) 2012-04-27 2019-05-14 Aptima, Inc. Systems and methods to customize student instruction
US10438156B2 (en) 2013-03-13 2019-10-08 Aptima, Inc. Systems and methods to provide training guidance
US9390386B2 (en) * 2013-03-15 2016-07-12 Neustar Ip Intelligence, Inc. Methods, systems, and apparatus for predicting characteristics of a user
US9009780B2 (en) 2013-03-15 2015-04-14 Facebook, Inc. Privacy verification tool
US8726203B1 (en) * 2013-04-25 2014-05-13 Cydesign, Inc. System and method for generating virtual test benches
EP4047530A1 (en) 2013-05-30 2022-08-24 President and Fellows of Harvard College Systems and methods for performing bayesian optimization
US9390379B2 (en) * 2013-07-23 2016-07-12 Neustar Ip Intelligence, Inc. Methods, systems, and apparatus for learning a model for predicting characteristics of a user
US20150058078A1 (en) * 2013-08-26 2015-02-26 Microsoft Corporation Rule to constraint translator for business application systems
US9922123B2 (en) 2014-01-10 2018-03-20 Facebook, Inc. Policy performance ordering
US9276960B2 (en) 2014-04-02 2016-03-01 Facebook, Inc. Policy partial results
US9460308B2 (en) 2014-05-09 2016-10-04 Facebook, Inc. Multi-level privacy evaluation
US20150363518A1 (en) * 2014-06-11 2015-12-17 International Business Machines Corporation Dynamic operating procedures for emergency response
US10291652B2 (en) * 2014-07-25 2019-05-14 Facebook, Inc. Policy evaluation trees
US10235686B2 (en) 2014-10-30 2019-03-19 Microsoft Technology Licensing, Llc System forecasting and improvement using mean field
US10311467B2 (en) * 2015-03-24 2019-06-04 Adobe Inc. Selecting digital advertising recommendation policies in light of risk and expected return
CN104699983B (en) * 2015-03-24 2018-08-03 清华大学 Confronting simulation optimization method and system
US20170185933A1 (en) * 2015-06-14 2017-06-29 Jda Software Group, Inc. Distribution-Independent Inventory Approach under Multiple Service Level Targets
US9893529B1 (en) 2015-10-21 2018-02-13 University Of South Florida Coupling dynamics for power systems with iterative discrete decision making architectures
US10074066B2 (en) * 2016-01-16 2018-09-11 International Business Machines Corporation Two phase predictive approach for supply network optimization
JP6962539B2 (en) * 2017-02-24 2021-11-05 株式会社レクサー・リサーチ Business plan optimization method
WO2018170444A1 (en) 2017-03-17 2018-09-20 The Regents Of The University Of Michigan Method and apparatus for constructing informative outcomes to guide multi-policy decision making
US20180349158A1 (en) * 2017-03-22 2018-12-06 Kevin Swersky Bayesian optimization techniques and applications
US10762423B2 (en) 2017-06-27 2020-09-01 Asapp, Inc. Using a neural network to optimize processing of user requests
US10885240B2 (en) * 2017-11-02 2021-01-05 Uatc, Llc Deterministic simulation framework for autonomous vehicle testing
SE542387C2 (en) 2017-12-20 2020-04-21 Scania Cv Ab Method and control arrangement in a transportation surveillance system, monitoring a system comprising autonomous vehicles, for assisting a human operator in predictive decision making
DE102018109691A1 (en) * 2018-04-23 2019-10-24 Isabell Franck UG (haftungsbeschränkt) Method for computer-assisted production optimization of at least one production step
JP2023533225A (en) 2020-07-01 2023-08-02 メイ モビリティー,インコーポレイテッド Methods and systems for dynamically curating autonomous vehicle policies
US20220108256A1 (en) * 2020-10-02 2022-04-07 dcyd, Inc. System and method for decision system diagnosis
US11396302B2 (en) 2020-12-14 2022-07-26 May Mobility, Inc. Autonomous vehicle safety platform system and method
JP2024500672A (en) 2020-12-17 2024-01-10 メイ モビリティー,インコーポレイテッド Method and system for dynamically updating an autonomous agent's environmental representation
WO2022212944A1 (en) 2021-04-02 2022-10-06 May Mobility, Inc. Method and system for operating an autonomous agent with incomplete environmental information
US11565717B2 (en) 2021-06-02 2023-01-31 May Mobility, Inc. Method and system for remote assistance of an autonomous agent
US11814072B2 (en) 2022-02-14 2023-11-14 May Mobility, Inc. Method and system for conditional operation of an autonomous agent

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4992942A (en) * 1989-01-25 1991-02-12 Bahm, Inc. Apparatus and method for controlling a system, such as nutrient control system for feeding plants, based on actual and projected data and according to predefined rules
US6853952B2 (en) * 2003-05-13 2005-02-08 Pa Knowledge Limited Method and systems of enhancing the effectiveness and success of research and development

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5461699A (en) * 1993-10-25 1995-10-24 International Business Machines Corporation Forecasting using a neural network and a statistical forecast
US6725208B1 (en) * 1998-10-06 2004-04-20 Pavilion Technologies, Inc. Bayesian neural networks for optimization and control

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160004987A1 (en) * 2014-07-04 2016-01-07 Tata Consultancy Services Limited System and method for prescriptive analytics
US10586195B2 (en) * 2014-07-04 2020-03-10 Tata Consultancy Services Limited System and method for prescriptive analytics
CN111565990A (en) * 2018-01-08 2020-08-21 伟摩有限责任公司 Software validation for autonomous vehicles
EP3710327A4 (en) * 2018-01-08 2021-09-01 Waymo LLC Software validation for autonomous vehicles
US11210200B2 (en) 2018-01-08 2021-12-28 Waymo Llc Software validation for autonomous vehicles
US11645189B2 (en) 2018-01-08 2023-05-09 Waymo Llc Software validation for autonomous vehicles
CN111565990B (en) * 2018-01-08 2023-07-25 伟摩有限责任公司 Software verification for autonomous vehicles
CN111125848A (en) * 2019-11-25 2020-05-08 李林卿 Dangerous goods transportation network emergency rescue resource allocation method

Also Published As

Publication number Publication date
US20060200333A1 (en) 2006-09-07
WO2004090659A3 (en) 2005-07-21

Similar Documents

Publication Publication Date Title
US20060200333A1 (en) Optimizing active decision making using simulated decision making
Geffner et al. A concise introduction to models and methods for automated planning
Grosan et al. Intelligent systems
US7437335B2 (en) Method and system for constructing cognitive programs
Schott Fault tolerant design using single and multicriteria genetic algorithm optimization
US20020095393A1 (en) Computer program for and method of discrete event computer simulation incorporating biological paradigm for providing optimized decision support
Jiménez Celorrio Planning and learning under uncertainty
Busoniu et al. Learning and coordination in dynamic multiagent systems
Eberbach The $-calculus process algebra for problem solving: A paradigmatic shift in handling hard computational problems
Dahl The lagging anchor algorithm: Reinforcement learning in two-player zero-sum games with imperfect information
Brihaye et al. Quantitative Reachability Stackelberg-Pareto Synthesis Is NEXPTIME-Complete
Koop Investigating experience: Temporal coherence and empirical knowledge representation
Sahin A Bayesian network approach to the self-organization and learning in intelligent agents
Núñez-Molina et al. A Review of Symbolic, Subsymbolic and Hybrid Methods for Sequential Decision Making
Perreault On the usability of continuous time Bayesian networks: Improving scalability and expressiveness
Bruyère et al. As Soon as Possible but Rationally
Wasser An object-oriented representation for efficient reinforcement learning
FILIOT Monte Carlo Tree Search with Advice
Plaat et al. Reinforcement Learning
Ashok A Learning Twist on Controllers: Synthesis via Partial Exploration and Concise Representations
Rubin On the Construction, Maintenance and Analysis of Case-Based Strategies in Computer Poker
Tang Reinforcement Learning: New Algorithms and An Application for Integer Programming
de Oliveira A Modular Architecture for Model-Based Deep Reinforcement Learning
Kwiatkowski Simulating crowds with reinforcement learning
Delazeri Generating heuristics for the quadratic unconstrained binary optimization problem using automatic algorithm configuration

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 10552645

Country of ref document: US

122 Ep: pct application non-entry in european phase
WWP Wipo information: published in national office

Ref document number: 10552645

Country of ref document: US