
Publication number: US 20070174185 A1
Publication type: Application
Application number: US 11/515,470
Publication date: Jul 26, 2007
Filing date: Sep 2, 2006
Priority date: Oct 3, 2002
Also published as: US7103597, US20040068501
Inventor: David McGoveran
Original Assignee: McGoveran David O
Adaptive method and software architecture for efficient transaction processing and error management
US 20070174185 A1
Abstract
A new type of transaction manager is disclosed that provides a unique set of methods and components for efficient transaction processing, error management, and transaction recovery. The combination of these methods and components are applicable to a wide range of business and technical scenarios that do not lend themselves to traditional transaction processing methods, permitting a degree of automation and robustness hitherto impossible. The methods extend and generalize the traditional transaction properties of atomicity, consistency, isolation, and durability.
Images(6)
Claims(26)
182. A method for efficient transaction processing and error management implemented as an Adaptive Transaction Manager (‘ATM’), the method being extensible to multiple business entities (related or independent) and extensible to complex transactions, said method comprising:
a coordinated set of sub-methods, extensible to and instantiable upon a distributed network of computers, each particular sub-method and any set thereof also being usable by either a unitary database management system or a distributed database management system, said sub-methods comprising steps for:
implementing transaction consistency points;
implementing transaction relaying;
implementing corrective transactions;
implementing lookahead-based resource management; and,
implementing dependency-based concurrency optimization.
183. A general-purpose computer incorporating specific hardware and software for manipulating at least one database when processing at least one transaction, wherein said specific hardware and software comprise:
means for implementing transaction consistency points;
means for implementing transaction relaying;
means for implementing corrective transactions;
means for implementing lookahead-based resource management; and,
means for implementing dependency-based concurrency optimization.
184. A general-purpose computer that includes software, dynamic and stable memory, and logical processing hardware, programmed for manipulating at least one database when processing at least one transaction and manipulating steps in at least one transaction, comprising:
means for manipulating the software, logical processing hardware, and dynamic and stable memory, to designate a set of current data values for any part of the data in the database and any particular step in a transaction, as a transaction consistency point;
means for manipulating the software, logical processing hardware, and dynamic and stable memory, to select at least one set of current data values for any part of the data in the database, and to manipulate any set of particular steps in at least two transactions, to effectuate transaction relaying;
means for manipulating the software, logical processing hardware, and dynamic and stable memory, upon detection of an error condition, to selectively effectuate implementation of at least one corrective transaction;
means for manipulating the software, logical processing hardware, and dynamic and stable memory, to automatically implement optimization of the use of said logical processing hardware and dynamic and stable memory through altering the steps in a definition of said transaction using lookahead-based resource management; and,
means for manipulating the software, logical processing hardware, and dynamic and stable memory, to automatically manipulate the steps of said transaction and software, and automatically implement optimization of said logical processing hardware and dynamic and stable memory, for the processing of said transaction, through implementation of dependency-based concurrency optimization.
185. A method as in claim 182 wherein the ATM is applied to at least one member of a set of business problems comprising telecommunications, retail, inventory, funds transfer, message repair, financial transactions, government fiats, negotiation, asset exchanges, distributed business transactions, electronic commerce, business process automation, business-to-business exchanges, business integration, insurance, and billing.
186. A method as in claim 183 further comprising using transaction relaying to implement non-flat transactions.
187. A computerized method for both efficient transaction processing implemented as a defining feature of an Adaptive Transaction Manager (‘ATM’) and for determining a first transaction, said method comprising:
(a) identifying a first set of consistency conditions on a first set of data elements, comprising at least a first consistency condition;
(b) identifying a second set of consistency conditions on a second set of data elements, comprising at least a second consistency condition, without requiring the second set of consistency conditions to be distinct from the first set of consistency conditions;
(c) associating the first set of consistency conditions with a first set of operations comprising at least one operation on at least one element from the combined first set of data elements and second set of data elements, the first set of operations having an initial state and a final state, said final state being:
represented by the second set of data elements;
required to satisfy the second set of consistency conditions;
reached upon successful termination of the first set of operations;
consistent with the second set of consistency conditions;
computed from both the initial state and any parameters; and,
resulting from unexceptional execution;
(d) specifying the initial state of the first set of operations as being the first transaction's initial state;
(e) performing at least one operation of the first set of operations;
(f) specifying the final state of the first set of operations as being the first transaction's final state; and,
(g) committing the first transaction automatically after determining that the first transaction's final state satisfies the second set of consistency conditions.
188. A method as in claim 187 for implementing a first implicit transaction as the first transaction wherein an explicit transaction directive to begin the first transaction does not precede any operation in the first set of operations as any part of the first transaction's initial state, an explicit transaction directive to end the first transaction does not follow the first set of operations' final operation as any part of the first transaction's final state, and every operation necessary to initiate and to end the first transaction is performed automatically.
189. A method as in claim 187 further comprising guaranteeing that the first transaction satisfies at least one member of a set of transaction properties comprising atomicity, consistency, isolation, and durability.
190. A method as in claim 188 wherein the step of committing the first implicit transaction guarantees the property of atomicity by performing the step if and only if each and every operation of the first set of operations is both successful and representable as a connected set of state transitions resulting from the first set of operations, said first set of operations being fully determined at the first transaction's final state.
191. A method as in claim 187 further comprising guaranteeing at least partially the property of consistency by:
defining a class of consistency conditions prior to reaching the first transaction's initial state; and,
determining that the first set of consistency conditions belongs to the class of consistency conditions.
192. A method as in claim 188 further comprising guaranteeing at least partially the property of consistency by:
defining a class of consistency conditions prior to the first implicit transaction's final state being reached; and,
determining that the second set of consistency conditions belongs to the class of consistency conditions.
193. A method as in claim 187 wherein the step of committing the first transaction guarantees the property of consistency by performing the step if and only if:
the first transaction's initial state is determined to satisfy some set of consistency conditions belonging to a first class of consistency conditions defined prior to the first transaction reaching the first transaction's initial state; and,
the first transaction's final state is determined to satisfy some set of consistency conditions belonging to a second class of consistency conditions defined prior to the first transaction reaching the first transaction's final state;
wherein the first class of consistency conditions and the second class of consistency conditions may be one and the same class of consistency conditions.
194. A method as in claim 187 further comprising guaranteeing the property of isolation and incorporating the steps of:
identifying a first sharable resource as being a portion of a first intermediate state of the first transaction;
identifying at least a second transaction that has not terminated;
determining that the portion of the first intermediate state of the first transaction is not inconsistent, non-contradictory, and non-conflicting with the second transaction;
controlling sharing of the first sharable resource among the first transaction and the second transaction, including access by the second transaction to the first sharable resource, based on the step of determining.
195. A method as in claim 194 wherein the step of determining further comprises:
identifying a common history of the first sharable resource that is consistent with both the first transaction's definition and history and with the second transaction's definition and history, said common history being functionally equivalent to a partially ordered set of states and state transitions of the first sharable resource, each said state transition corresponding to an operation capable of generating that state transition from one of said states.
196. A method as in claim 194 wherein the step of controlling further comprises:
precluding sharing of the first sharable resource among the first transaction and the second transaction when both the first transaction and the second transaction could not have accessed a common initial state of the first sharable resource given the known states of the first sharable resource, when those states existed and when the first transaction and the second transaction began.
197. A method as in claim 194 further comprising:
rewriting at least one rewritable operation of any of the first transaction, the second transaction, and a second implicit transaction so as to be recorded as having been executed in the context of a different transaction;
executing the at least one rewritable operation in the context of the different transaction; and,
using the result of said step of rewriting as a component in a recoverable record of all operations and states logically necessary to maintain the common history of the first sharable resource.
198. A method as in claim 197 wherein the different transaction is a new implicit transaction and the original, pre-rewriting result of said rewritable operation is then not committed in the context of any of the first transaction, the second transaction, and the second implicit transaction.
199. A method as in claim 187 wherein the property of durability is guaranteed by ensuring that the first transaction's final state is recoverable insofar as the first transaction's final state has any effect on transaction history at the time of recovery.
200. A method as in claim 199 wherein the first transaction's final state is recovered by recomputing the first transaction's final state.
201. A method as in claim 187 further comprising:
selecting a first alternative final state from among a set of alternative final states;
basing the selection on at least one member of a set of acceptability criteria comprising any of measures of risk, measures of opportunity, measures of cost, and measures of benefit; and,
ensuring that the final operation within the first transaction will yield the selected alternative final state.
202. A method as in claim 187 further comprising:
stating a first goal;
deriving a first goal-oriented transaction's definition by selecting from a second set of operations a sequence of transitions from the first goal-oriented transaction's initial state to the first goal-oriented transaction's final state; and,
determining that said first goal-oriented transaction's final state satisfies the first goal; and,
executing the first goal-oriented transaction.
203. A method as in claim 187 further comprising maintaining in durable storage at least one member of a set of audit log enhancements, said set of audit log enhancements comprising identification of an acceptable state, identification of a mistaken state, identification of a compensating transaction, identification of a corrective transaction, and identification of a final acceptable state.
204. A method as in claim 188 further comprising:
maintaining in durable storage at least one member of a set of audit log enhancements comprising identification of an acceptable state, identification of a mistaken state, identification of a compensating transaction, identification of a corrective transaction, identification of a final acceptable state, and, identification of the first implicit transaction.
205. A method as in claim 203 wherein the at least one member of a set of audit log enhancements is maintained in an audit trail.
206. A method as in claim 204 wherein the at least one member of a set of audit log enhancements is maintained in a transaction log.
207. A general-purpose computer incorporating specific hardware and software for manipulating at least one database when processing at least one transaction, wherein said specific hardware and software comprise:
Parser means for any of the set of interpreting and compiling transaction definitions;
Repository means for storing, retrieving, and modifying elements of transaction metadata, including at least one of transaction definitions, consistency conditions, sets of consistency conditions, classes of consistency conditions, dependencies, publication/subscription definitions, audit log enhancements, and transaction resources;
Repository Manager means for coordinating all stored information, including dependencies, transaction definitions, associations, sets of consistency conditions, classes of consistency conditions, audit log enhancements, consistency categories, and subscriptions;
Consistency Manager means for detecting and verifying transaction consistency points, verifying consistency of transaction and resource histories, and defining implicit transactions;
Dependency Manager means for interpreting dependency directives, detecting dependencies, determining transaction and resource histories, deriving sequences of operations to attain specified states as goals, identifying consistent groups based on dependencies and asserting the corresponding consistency points;
Resource Manager means for implementing transaction relaying and lookahead-based resource management, accessing and updating resources, allocation management, scheduling, resource isolation, maintaining cache, maintaining other resource constraints, detecting resource requirements, implementing resource management directives, and providing resource management directives to the Restructuring Processor;
Correction Processor means for implementing corrective transactions, correlating abnormal conditions and consistency points, and by any set of using direct association and using any of consistency condition categories and classes of consistency conditions, performing any of discovering, optimally selecting, and creating a corrective transaction and submitting the corrective transaction to the Execution Manager;
Restructuring Processor means for rewriting transactions;
Isolation Manager means for guaranteeing isolation of resources and transactions;
Publication/Subscription Manager means for processing publication and subscription definitions, detecting publication events, and notifying appropriate subscribers of publication events;
Execution Manager means for processing transactions, allocating and deallocating transaction contexts, passing directives and instructions to the appropriate ATM components, and orchestrating transaction scheduling, commit, rollback, and rollforward; and,
Resource Scheduler means for implementing dependency-based concurrency optimization.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This is a division in part of Ser. No. 10/263,589, filed on Oct. 2, 2002. The USPTO issued a restriction requirement on Jan. 12, 2006 requiring the prosecution of either claims 93-181, which invention was classified as belonging to class 707, subclass 8; or claims 182-184, which invention was classified as belonging to class 707, subclass 202. Prosecution of claims 93-181 of the first invention continued under the above-referenced application and serial number. This divisional application is filed to continue the prosecution, separately, of the invention described in claims 182-184, and expressly incorporates both below and by reference all of the original, pre-divisional application's specification and drawings.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

DESCRIPTION OF ATTACHED APPENDIX

Not Applicable

BACKGROUND OF THE INVENTION

A transaction can be defined as a set of actions on a set of resources or some subset thereof, said actions including changes to those resources. The initial state of a set of resources that will be changed by a transaction is defined as being consistent, and so either implicitly or explicitly satisfies a set of consistency conditions (a.k.a. constraints or integrity rules). Each particular transaction includes one or more operations that may alter the resources (e.g., addition, subtraction, selection, exchange, or transformation). Once defined, the transaction creates a delimitable set of changes from initial conditions. Each change to the resources (short of the final change) creates an intermediate state of those resources, which often is not intended to be accessible to other transactions.

Under such an implementation, each transaction operates on the set of resources in an initial state and, after any operations performed by the transaction, leaves the set of resources in a final state. Thus a transaction may be viewed as a means of transforming a set of resources from an initial consistent state to a final consistent state (possibly, though generally not, the same as the initial state).
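The state-transformation view above can be sketched as follows. This is a minimal illustration, not part of the disclosed invention; the account names, the `transfer` operation, and the non-negative-balance consistency condition are all hypothetical examples.

```python
# A transaction viewed as a transformation from an initial consistent state
# to a final consistent state. The consistency condition (no negative
# balances) is a hypothetical example.

def is_consistent(state):
    """Consistency condition: all account balances are non-negative."""
    return all(v >= 0 for v in state.values())

def transfer(state, src, dst, amount):
    """A transaction: moves funds, producing a new (final) state."""
    new_state = dict(state)      # work on a copy; intermediate states stay private
    new_state[src] -= amount
    new_state[dst] += amount
    return new_state

initial = {"a": 100, "b": 50}
assert is_consistent(initial)    # the initial state is consistent by definition
final = transfer(initial, "a", "b", 30)
assert is_consistent(final)      # a correctly written transaction preserves consistency
```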

Transaction processing is subject to multiple difficulties. A transaction may use resources inefficiently. Transactions may fail to complete operations as designed. Errors may cause the final state to be inconsistent. Transactions may execute too slowly. Such difficulties can be handled manually if the environment is simple enough. Automated or semi-automated means (as supplied, for example, by a transaction management facility) are required in more sophisticated situations.

An environment in which transactions operate is often subject to a transaction management facility, often referred to simply as a “transaction manager.” The responsibility of a transaction manager is to ensure that the initial and final states are consistent and that no harmful side effects occur in the event that concurrent transactions share resources (isolation). A transaction manager typically enforces the isolation of a specific transaction using a default concurrency control mechanism (e.g., pessimistic or optimistic). If a condition such as an error occurs before the final state is reached, it is often the responsibility of a transaction management facility to return the system to the initial state. This sort of automated transaction processing lies behind the greatest volume of financial and commercial transactions extant in modern society.
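The error-handling responsibility described above can be sketched as follows. This is an illustrative simplification only; the `debit`/`check` operations and the overdraft error are hypothetical.

```python
# Sketch of a transaction manager's error handling: if an error occurs before
# the final state is reached, the system is returned to the initial state.

class SimpleTransactionManager:
    def run(self, state, operations):
        snapshot = dict(state)           # remember the initial state
        try:
            for op in operations:
                op(state)                # apply each operation in place
            return True                  # success: the final state is kept
        except Exception:
            state.clear()
            state.update(snapshot)       # rollback: restore the initial state
            return False

def debit(s):
    s["a"] -= 150

def check(s):
    if s["a"] < 0:
        raise ValueError("overdraft")    # error condition before the final state

tm = SimpleTransactionManager()
acct = {"a": 100}
ok = tm.run(acct, [debit, check])        # fails, so the initial state is restored
```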

Automated transaction processing, both with and without transaction management facilities, has been designed traditionally with an unspoken assumption that errors are exceptional. The programming, both its design and coding, focuses on implementing transactions in a near-perfect world where it is permissible to simply start over and redo the work if anything goes wrong. Even if this were to model accurately the majority of automated commercial transactions, it would not reflect the entirety of any business's real world experience. In the real world, eighty percent or more of the management effort and expertise is about handling exceptions, mistakes, and imperfections. In automated transaction processing, error recovery mechanisms are usually seen as an afterthought, a final ‘check-box’ on the list of features and transactions that can be handled (if all goes perfectly).

A naïve approach to the implementation of complex automated transaction processing systems maintains that the system resulting from integrating (via transactional messaging) a set of applications that already have error recovery mechanisms will itself recover from errors. Experience and careful analysis have shown that nothing could be further from the truth. As more and more business functions are integrated, the problems of automated error recovery become increasingly important and complex. Errors can propagate just as rapidly as correct results, but the consequences can be devastating.

As more and more business functions are integrated, the problems of automated error recovery and resource management become increasingly important. It is only natural that many of the systems a business automates first are those the business deems essential to the execution of its core competencies, whose completion is ‘mission critical’. Automation demands the reliability we associate with transaction management if error recovery is to be robust. With each success at automating a particular business transaction, the value of connecting and integrating disparate automated transactions increases. Separate transactions, each of them simple, become a complex transaction when connected. With each integrative step, the need for acceptable error recovery becomes ever more important.

Traditional approaches to automated transaction management emphasize means to guarantee the fundamental properties of a properly defined or ‘formal’ transaction, which are atomicity, consistency, isolation, and durability. These properties are usually referred to by their acronym, ACID. Transactions, especially if complex, may share access to resources only under circumstances that do not violate these properties, although the degree to which transaction management facilities strictly enforce the isolation property is often at the discretion of the user.

It is not uncommon to refer to any group of operations on a set of resources (i.e., a unit of work) as a transaction, even if they do not completely preserve the ACID properties. In keeping with this practice, we will use the term transaction without a qualifying adjective or other modifier when referring to a unit of work of any kind, whether formal or not. We will use the qualified term pseudo-transaction when we want to refer specifically to a unit of work that does not preserve all of the ACID properties, although it may preserve some of them. Pseudo-transactions exist for a variety of reasons including the difficulty of proper transaction design and enforcement, incomplete knowledge of consistency rules, attempts to increase concurrency at the expense of decreased isolation, attempts to increase performance at the expense of atomicity, and so on.

The ACID properties lead to a very specific behavior when one or more of the elements that compose a transaction fail in a manner that cannot be transparently recovered (a so-called “unrecoverable error”): the atomicity property demands that the state of the resources involved be restored so that it is as though no changes whatsoever had been made by the transaction. Thus, an unrecoverable error always results in transitioning to the initial state (i.e., the initial state being restored), the typical process for achieving this being known as “rollback.” An alternative method of restoring the initial state is to run an “undo” or “inverse” transformation known as a compensating transaction (discussed in more detail below). This of course presumes that for such mandated compensating transactions, for every error it is possible first to identify the class of error, then to select the most suitable compensating transaction, and finally to implement that compensating transaction. A problem with the current approach to enforcing atomicity is that viable work is often wasted when the initial state is recovered. A second problem is that transactions dependent on a failed transaction cannot begin until the failed transaction is resubmitted and finally completes, possibly resulting in excessive processing times and perhaps ultimately causing a failure to achieve the intended business purpose.
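The compensating-transaction alternative described above can be illustrated as follows. The `deposit`/`compensate_deposit` pair is a hypothetical example, not taken from the disclosure: rather than overwriting the state with a saved copy (rollback), an inverse operation logically undoes the committed work.

```python
# Restoring the initial state via a compensating transaction: an inverse
# operation is run, rather than restoring a saved copy of the state.

def deposit(state, acct, amount):
    state[acct] = state.get(acct, 0) + amount

def compensate_deposit(state, acct, amount):
    """Compensating transaction: the logical inverse of deposit."""
    state[acct] -= amount

state = {"a": 100}
deposit(state, "a", 40)             # the original, already-committed work
# An error elsewhere means the deposit must be undone; the state can no
# longer simply be overwritten, so the inverse transformation is applied:
compensate_deposit(state, "a", 40)
assert state == {"a": 100}          # initial state restored via the inverse
```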

The consistency property guarantees the correctness of transactions by enforcing a set of consistency conditions on the final state of every transaction. Consistency conditions are usually computable, which means that a software test is often executed to determine whether or not a particular consistency condition is satisfied in the current state. Thus, a correctly written transaction becomes one which, when applied to resources in a first consistent state, transforms those resources into a second (possibly identical) consistent state. Intermediate states, created as the component operations of a transaction are applied to resources, may or may not satisfy a set of consistency conditions and so may or may not be a consistent state. A problem with this approach is that consistency must be either cumulative during the transaction, or else enforced at transaction completion. In most cases, transactions are assumed to be written correctly and the completion of a transaction is simply assumed to be sufficient to ensure a consistent state. This leads to a further problem: the interactions among a collection of transactions that constitute a complex transaction may not result in a consistent state unless all consistency rules are enforced automatically at transaction completion.
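The idea of computable consistency conditions enforced at transaction completion can be sketched as follows; the two example conditions (non-negative balances, conservation of total funds) are hypothetical.

```python
# Consistency conditions as computable tests, enforced at transaction
# completion: commit succeeds only if every registered condition holds.

conditions = [
    lambda s: all(v >= 0 for v in s.values()),   # no negative balances
    lambda s: sum(s.values()) == 150,            # total funds conserved
]

def commit_if_consistent(state):
    """Return True (commit) only when the final state satisfies all conditions."""
    return all(cond(state) for cond in conditions)

assert commit_if_consistent({"a": 100, "b": 50})        # consistent: commits
assert not commit_if_consistent({"a": 200, "b": -50})   # violates a condition
```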

For complex transactions that share resources, the isolation property further demands that concurrent or dependent transactions behave as though they were run in isolation (or were independent): that is, no other transaction can have seen any intermediate changes (there are no “side effects”) because these might be inconsistent. The usual approach to ensuring the isolation property is to lock any resource that is touched by the transaction, thereby ensuring that other transactions cannot modify any such resource (a share lock) and cannot access modified resources (an exclusive lock). With regard to resource management, locking is used to implement a form of dynamic scheduling. The most commonly used means for ensuring this is implementing the rule known as “two-phase locking” wherein, while a transaction is processing, locks on resources accessed by that transaction are acquired during phase one and are released only during phase two, with no overlap in these phases. Such an implementation guarantees that concurrent or dependent transactions can be interleaved while preserving the isolation property. A problem with this approach is that it necessarily increases the processing time of concurrent transactions that need to access the same resources, since once a resource is locked, it may not be modified by any other transaction until the locking transaction has completed. Another problem due to this approach is that it occasionally creates a deadly embrace or deadlock condition among a group of transactions. In the simplest case of the group consisting of only two transactions, each of the two transactions waits indefinitely for a resource locked by the other. Deadlock conditions can arise in complex ways among groups of more than two transactions.
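The two-phase locking rule described above can be sketched as follows (an illustration only; real lock managers also track lock modes, waiters, and deadlock detection, none of which is modeled here):

```python
# Two-phase locking sketch: locks are acquired in the growing phase and
# released only in the shrinking phase; once any lock is released, no
# further lock may be acquired.

class TwoPhaseLocker:
    def __init__(self):
        self.held = set()
        self.shrinking = False       # once True, no more acquisitions allowed

    def acquire(self, resource):
        if self.shrinking:
            raise RuntimeError("2PL violated: acquire after release")
        self.held.add(resource)

    def release(self, resource):
        self.shrinking = True        # entering phase two
        self.held.discard(resource)

t = TwoPhaseLocker()
t.acquire("row1")
t.acquire("row2")
t.release("row1")                    # phase two begins here
violated = False
try:
    t.acquire("row3")                # illegal: phase one has ended
except RuntimeError:
    violated = True                  # the rule correctly rejects the acquire
```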
Other approaches to maintaining the isolation property include optimistic concurrency (such as time stamping) and lock or conflict avoidance (such as static scheduling via transaction classes or conflict graphs, nested transactions, and multi-versioning). Various caching schemes have been designed to improve concurrency by minimizing the time required to access a resource, while respecting a particular approach to enforcing the isolation property. Each of the existing approaches to enforcing isolation, and the associated techniques and implications for resource management, fails to meet the needs imposed by complex, possibly distributed, business transactions.
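Of the optimistic approaches mentioned above, a version-check variant can be sketched as follows. The single-versioned `store` and the retry-on-conflict convention are hypothetical simplifications:

```python
# Optimistic concurrency sketch: instead of locking, a transaction records
# the version of a resource it read and, at commit time, succeeds only if
# that version is unchanged (no conflicting writer intervened).

store = {"value": 10, "version": 1}

def read(store):
    return store["value"], store["version"]

def optimistic_commit(store, new_value, read_version):
    if store["version"] != read_version:
        return False                 # conflict detected: caller must retry
    store["value"] = new_value
    store["version"] += 1            # bump version on successful write
    return True

val, ver = read(store)
assert optimistic_commit(store, val + 5, ver)        # no conflict: commits
stale_ver = ver                                      # another reader holds the old version
assert not optimistic_commit(store, 99, stale_ver)   # conflict: rejected
```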

If no error occurs, the completion of the transaction guarantees not only a consistent state, but also a durable one (the durability property) through a process known as “commit.” The step in a transaction at which a “commit” is processed is known as the commit point. The durability property is intended to guarantee that the specific result of a completed transaction can be recovered at a later time, and cannot be repudiated. Ordinarily, the durability property is interpreted as meaning that the final state of resources accessed by a transaction is, in effect, recorded in non-volatile storage before confirming the successful completion of the transaction. Usually, this is done by recording some combination of resource states, along with the operations that have been applied to the resources in question. The software that handles this recording is called a resource manager.
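The recording step performed by a resource manager at commit can be sketched as follows; the in-memory `stable_log` list stands in for non-volatile storage, and the replay-based `recover` function is a hypothetical simplification:

```python
# Durability sketch: the final state is recorded in stable storage before the
# commit is acknowledged, so the result survives a loss of volatile memory.

stable_log = []                      # stands in for non-volatile storage

def commit(txn_id, final_state):
    stable_log.append((txn_id, dict(final_state)))  # record before acknowledging
    return "committed"               # acknowledge only after the record is durable

def recover():
    """After a crash, replay the log to reconstruct committed final states."""
    recovered = {}
    for txn_id, state in stable_log:
        recovered.update(state)
    return recovered

commit("t1", {"a": 70})
commit("t2", {"b": 80})
assert recover() == {"a": 70, "b": 80}   # committed results are recoverable
```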

A variant of the commit point, in which a user (possibly via program code) asserts to the transaction manager that they wish to make the then current state recoverable and may subsequently wish to rollback work to that known state, is known as a savepoint. Because savepoints are arbitrarily defined, they need not represent a consistent state. Furthermore, the system will return to a specific savepoint only at the explicit request of the user. Typically, savepoints are not durable. Savepoints cannot be asserted automatically by the system except in the most rudimentary fashion as, for example, after every operation or periodically based on elapsed time or quantity of resources used. None of these approaches enable the system to determine to which savepoint it should rollback.
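The savepoint mechanism described above can be sketched as follows (illustrative only; as the text notes, real savepoints are typically neither durable nor guaranteed to mark consistent states):

```python
# Savepoint sketch: the user marks a known state mid-transaction and may
# later roll back to it, rather than abandoning the whole transaction.

class SavepointTransaction:
    def __init__(self, state):
        self.state = dict(state)
        self.savepoints = {}

    def savepoint(self, name):
        self.savepoints[name] = dict(self.state)   # remember this state by name

    def rollback_to(self, name):
        self.state = dict(self.savepoints[name])   # only at explicit request

txn = SavepointTransaction({"a": 100})
txn.state["a"] -= 30
txn.savepoint("sp1")                 # user-asserted known state
txn.state["a"] -= 50
txn.rollback_to("sp1")               # undo only the work done after sp1
assert txn.state == {"a": 70}
```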

When the elements of a transaction are executed (whether concurrent or sequential) under multiple, independent resource managers, the rollback and commit processes can be coordinated so that the collection behaves as though it were a single transaction. In essence, the elements are implemented as transactions in their own right, but are logically coupled to maintain ACID properties to the desired degree for the collection overall. Such transactions are called distributed transactions. The usual method for achieving this coordination is called two-phase commit. Unfortunately, this is an inefficient process which tends to reduce concurrency and performance, and cannot guarantee coordination under all failure conditions. Under certain circumstances, a system failure during two-phase commit can result in a state that is incorrect and that then requires difficult, costly, and time-consuming manual correction during which the system is likely to be unavailable. As with single transactions, compensating transactions can sometimes be used to restore the initial state of a collection of logically coupled transactions. In such cases, it may be necessary to run special compensating transactions that apply to the entire collection of transactions (known as a compensation sphere whether or not the collection is a distributed transaction).
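The two-phase commit coordination described above can be sketched as follows. Participant behavior is simulated with plain flags rather than real resource managers, and the failure modes the passage warns about (e.g., a crash between the phases) are deliberately not modeled:

```python
# Two-phase commit sketch: the coordinator first asks every participant to
# prepare; only if all vote yes does it tell them all to commit, otherwise
# it tells them all to abort.

def two_phase_commit(participants):
    # Phase 1: prepare - collect a vote from every resource manager
    votes = [p["can_commit"] for p in participants]
    if all(votes):
        for p in participants:       # Phase 2: commit everywhere
            p["state"] = "committed"
        return "committed"
    for p in participants:           # Phase 2: abort everywhere
        p["state"] = "aborted"
    return "aborted"

all_yes = [{"can_commit": True, "state": None}, {"can_commit": True, "state": None}]
one_no = [{"can_commit": True, "state": None}, {"can_commit": False, "state": None}]
assert two_phase_commit(all_yes) == "committed"
assert two_phase_commit(one_no) == "aborted"   # one "no" vote aborts the collection
```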

There are numerous optimizations and variations on these techniques, including split transactions, nested transactions, and the like. In practice, all these approaches have several disadvantages (and differ from the present invention):

poor concurrency due to locking is common;

the cost of rollback, followed by redoing the transaction, can be excessive;

the conditions of consistency, isolation, and durability are tightly bound together;

logically dependent transactions must either (a) be run sequentially with the possibility that an intervening transaction will alter the final state of the first transaction before the second transaction can take over, or (b) be run together as a distributed transaction, thereby locking resources for a much longer time and introducing two-phase commit performance and concurrency penalties;

there is significant overhead in memory and processing costs on already complex transactions;

the errors which are encountered and identified are not recorded (which can complicate systematic improvement of a system);

it is often undesirable in a business scenario to return a set of resources to some prior state, especially when a partially ordered set of interdependent transactions (i.e., a business process) has been run;

it is not always possible to define a compensating transaction for a given transaction, and the best compensating transaction often depends on context;

business transactions may result in very long times from start to completion, and may involve many logically coupled transactions, possibly each running under separate transaction or resource managers; and, finally,

the transaction manager will not be able to compensate for or recover from certain context-dependent, external actions that affect resources external to the resource manager.

Transactions can be classified broadly into three types, with corresponding qualifiers or adjectives: physical, logical, and business. A physical transaction is a unit of recovery; that is, a group of related operations on a set of resources that can be recovered to an initial state as a unit. The beginning (and end) of a physical transaction is thus a point of recovery. A physical transaction should have the atomicity and durability properties. A logical transaction is a unit of consistency; that is, a group of related operations on a set of resources that together meet a set of consistency conditions, consisting of one or more coordinated physical transactions. The beginning (and end) of a logical transaction is a point of consistency. In principle, logical transactions should have the ACID properties. A business transaction is a unit of audit; that is, a group of related operations on a set of resources that together result in an auditable change, consisting of one or more coordinated transactions. If, as in the ideal construction, each of these component transactions is a logical transaction, business transactions combine to form a predictable, well-behaved system. The beginning and end of a business transaction are thus audit points, by which we mean points at which an auditor can verify the transaction's identity and execution. Audit information obtained might include which operations were performed, in what order (to the degree it matters), by whom, when, with what resources, precisely which decision alternatives were taken in compliance with which rules, and whether the audit system was circumvented. Business transactions can be composed of other business transactions. The time span of a business transaction can be as short as microseconds or can span decades (e.g., life insurance premium payments and eventual disbursement, which must meet the consistency conditions imposed by law and policy).

The efficiency, correctness, and auditability of automated business transactions have a tremendous influence on a business' profitability. As transaction complexity increases, the impact of inefficiencies and errors increases combinatorially.

There are at least four general classes of ways that transactions can be complex. First, a transaction may involve a great deal of detail in its definition, each step of which may be either complex or simple, and may inherently require considerable time to process. Even if each individual step or operation is simple, the totality of the transaction may exceed the average human capacity to understand it in detail—for example, adding the total sum of money paid to a business on a given day, when the number of inputs is in the millions. This sort of complexity is inherently addressed (to the degree possible) by automation, and by following the well-known principles of good transaction design.

Second, a transaction may be distributed amongst multiple, separate environments, each such environment handling a sub-set of the total transaction. The set of resources may be divisible or necessarily shared, just as the processing may be either sequential or concurrent, and may be dependent or independent. Distributed transactions inherently impose complexity in maintaining the ACID properties and on error recovery.

Third, a transaction may be composed of multiple, linked transactions—for example, adding all of the monies paid in together, adding all of the monies paid out together, and summing the two, to establish a daily net cashflow balance for a company. Such joined transactions may include as a sub-transaction any of the three complex transaction types (including other joined transactions, in recursive iteration). And, of course, linked transactions may then be further joined, theoretically ad infinitum. Each sub-transaction is addressed as its own transaction, and thus is handled using the same means and with the same definitiveness. Linked transactions can become extremely complex due to the many ways they can be interdependent, thus making their design, maintenance, and error management costly and their use risky. Tremendous care must be taken to keep complexity under control.

Fourth, and last, a transaction may run concurrently in a mix of transactions (physical, logical, business, and pseudo). As the number of concurrent transactions, the number of inter-dependencies, or the speed of processing increase, or as the available resources decrease, the behavior of the transaction becomes more complex. Transaction managers, careful transaction design, and workload scheduling to avoid concurrency are among the methods that are used to manage this type of complexity, and provide only limited relief. Part of the problem is that the group behavior of the mix becomes increasingly unpredictable, and therefore unmanageable, with increasing complexity.

A business process may be understood as consisting of a set of partially-ordered inter-dependent or linked transactions (physical, logical, business, and pseudo), sometimes relatively simple and sometimes enormously complex, itself implementing a business transaction. The flow of a business process may branch or merge, can involve concurrent activities or transactions, and can involve either synchronous or asynchronous flows. Automated business process management is rapidly becoming the principal means of enabling business integration and business-to-business exchanges (e.g., supply chains and trading hubs).

Knowledge of both the internal logical structure of transactions and the inter-relationships among a group of transactions is often represented in terms of an inter-connected set of dependencies. Two types of dependency are important here: semantic and resource. If completion of an operation (or transaction) A is a necessary condition for the correct completion of some operation (or transaction) B, B is said to have a semantic dependency on A. If completion of an operation (or transaction) T requires some resource R, transaction T is said to have a resource dependency on the resource R. Resource dependencies become extremely important to the efficiency of transaction processing, especially if the resource cannot be shared (that is, if a principle of mutual exclusion is either inherent or enforced). In such cases, transactions (or operations) that depend on the resource become serialized on that resource, and thus, transactions that require the resource depend on (and wait for) the completion of the transaction that holds the resource.

Dependencies are generally depicted via a directed graph, in which the nodes represent either transactions or resources and arrows represent the dependency relationship. The graph that represents transactions that wait for some resource held by another transaction, for example, is called a “wait graph.” Dependency graphs may be as simple as a dependency chain or even a dependency tree, or may be a very complex, non-flat network.
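A wait graph of the kind described above can be represented directly as a directed graph and searched for cycles, since a cycle in a wait graph is a deadlock. The encoding below (a dict of adjacency lists, with hypothetical transaction names) is an illustrative sketch, not a prescribed representation.

```python
# Sketch of a "wait graph": an edge T1 -> T2 means transaction T1 waits
# for a resource held by T2. A depth-first search finds cycles (deadlocks).

def has_cycle(graph):
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True          # back edge: a cycle (deadlock) exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# T1 waits for a resource held by T2, and T2 waits for one held by T1.
wait_graph = {"T1": ["T2"], "T2": ["T1"], "T3": []}
print(has_cycle(wait_graph))   # -> True

chain = {"T1": ["T2"], "T2": ["T3"], "T3": []}   # a simple dependency chain
print(has_cycle(chain))        # -> False
```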

The value of successfully managing complexity through automated means grows as the transactions being managed become more complex, as this uses computerization's principal strength: the capacity for managing tremendous amounts of detail, detail that would certainly overwhelm any single human worker, and threaten to overwhelm a human organization not equipped with computer tools.

Unfortunately, the cost of any error that may propagate, for example, down a dependency chain of simple transactions, or affect a net of distributed transactions, also increases. Moreover, the cost of identifying possible sources of error increases as the contextual background for a complex transaction broadens, as all elements, assumptions, and consequences of particular transition states that may be visited while the transaction is processing must be examined for error. One certainty is that the law of unintended consequences operates with harsh and potentially devastating impact on program designers and users who blithely assume that their processes will always operate exactly as they are intended, rather than exactly according to what they are told (and sometimes more telling, not told) to do.

Error-handling for complex transactions currently operates with a bias towards rescinding a flawed transaction and restoring the original starting state. Under this approach, only when a transaction has successfully and correctly completed is the computer program granted permission to commit itself to the results and permanently accept them. If an error occurs, then the transaction is rolled back to the starting point and the data and control restored. This “either commit or rollback” approach imposes a heavy overhead load on complex transaction processing. If the complex transaction is composed of a chain of single, simpler transactions, then the entire chain must be rolled back to the designated prior commit point. All of the work done between the prior commit point and the error is discarded, even though it may have been valid and correct. If the complex transaction is a distributed one, then all resources used or affected by the transaction must be tracked and blocked from other uses until a transaction has successfully attained the next commit point; and when a single part of the entire distributed transaction encounters an error, all parts (and the resources used) must be restored to the values established at the prior commit point. Again, the work that has been successfully performed, even that which is not affected by the error, must be discarded. With linked transactions or any mix involving possibly interdependent pseudo-transactions, no general solution to the problem of automated error recovery has heretofore been presented.

Furthermore, the standard approach treats all transactional operations as identical. Operations, however, differ as to their reversibility, particularly in computer operations. Addition of zero may be reversible by subtracting zero. But multiplication by zero, even though the result is boring, is not exactly reversible by division by zero. Non-commutable transactions are not differentiated from commutable ones, nor do they have more stringent controls placed around their inputs and operation.

A second method currently used for error-handling in complex transactions is the application, after an error, of a pre-established compensatory mechanism, also called (collectively) compensating transactions as noted above. This presumes that all errors experienced can be predetermined, fit into particular categories, and a proper method of correction devised for each category. Using compensating transactions introduces an inherent risk of unrecoverable error: compensating transactions may themselves fail. Dependence entirely on compensating transactions risks the imposition of a Procrustean solution on a correct transaction that has been mistakenly identified as erroneous, or even on an erroneous transaction where the correction asserted becomes worse than the error.

Inherent in the use of compensating transactions is an assumption that each individually defined transaction has a matching transaction (the “compensating transaction”) that will “undo” any work that the original transaction did. When transactions are treated in isolation or are applied sequentially, it is relatively easy to define compensating transactions. All that is needed is the state of the system saved from the beginning of the transaction and a function to restore that state. (In essence, this is how one recovers a file using a backup copy. All that is lost is the intermediate correct stages between preparation of the backup and the occurrence of the error.) When transactions become interleaved, this simplistic notion of a compensating transaction no longer works and the implementation becomes trickier. In fact, a compensating transaction may not even exist for certain transactions. The compensating transaction may be selected and applied automatically by the transaction manager. Still, the process is much the same: the system is ultimately returned to an earlier state or its equivalent.

Automated support for compensating transactions requires that, for each transaction, a corresponding compensating transaction be registered with an error management system so that recovery can take place automatically and consistently. The rules for using compensating transactions become more complex as the transaction model departs further from the familiar “flat” model. Formally, compensating transactions should always return a system to a prior state. If multiple systems are recovered, they are all recovered to prior states that share a common point in time. If the atomic actions that make up a transaction can be done in any order, and if each of these has an undo operation, then such a compensating transaction can always be defined. Three guidelines have been published (McGoveran, 2000): (1) Try to keep the overall transaction model as close as possible to the traditional “flat” model or else a simple hierarchy of strictly nested transactions. (2) Design the atomic actions so that order of application within a transaction does not matter. (3) Make certain that compensating transactions are applied in the right order.
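Guideline (3) above, applying compensating transactions in the right order, can be sketched as an undo stack: each forward action registers its compensation, and on failure the compensations run in reverse order of the original actions. The log class, the lambda-based actions, and the dict-valued state are all assumptions for this illustration.

```python
# Sketch of registering a compensating transaction per forward action and,
# on error, applying the compensations in reverse order (last action first).

class CompensationLog:
    def __init__(self):
        self._undo_stack = []

    def run(self, action, compensation):
        action()
        # Register the compensation so recovery can proceed automatically.
        self._undo_stack.append(compensation)

    def compensate(self):
        # Reverse order: the last completed action is undone first.
        while self._undo_stack:
            self._undo_stack.pop()()

state = {"balance": 100}
log = CompensationLog()
log.run(lambda: state.update(balance=state["balance"] + 50),
        lambda: state.update(balance=state["balance"] - 50))
log.run(lambda: state.update(balance=state["balance"] * 2),
        lambda: state.update(balance=state["balance"] // 2))
# An error is detected; compensate in reverse order to restore the prior state.
log.compensate()
print(state["balance"])   # -> 100
```

Running the compensations in the wrong order here (subtract 50, then halve) would leave the balance at 125 rather than 100, which is why the ordering guideline matters when actions do not commute.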

A transaction logically consists of a begin transaction request, a set of steps or operations, each typically (though not necessarily) processed in sequential order of request and performing some manipulation of identified resources, and a transaction end request (which may be, for example, a commit, an abort, a rollback to named savepoint, and the like). Because the state of the art typically processes each step in the order received, the management of affected resources is largely cumulative rather than either pre-determined or predictive, even when the entire transaction is submitted at one time. Resource management, and in particular the scheduling of both concurrent transactions and the operations of which they are composed, may be either static or dynamic. Static scheduling uses various techniques such as conflict graphs to determine in advance of execution which transactions and operations may be interleaved or run concurrently. Dynamic scheduling uses various techniques such as locking protocols to determine at execution time which transactions and operations may be interleaved or run concurrently.

SUMMARY OF THE INVENTION

As outlined above, the usual interpretation of the ACID properties introduces a number of difficulties. The current interpretation of the atomicity property has resulted in an approach to error recovery that is costly in terms of both time and other resources in that it requires the ability to return affected resources to an initial state. The current interpretation of the consistency property recognizes consistent states only at explicit transaction boundaries, resulting in excessive processing at the end of a transaction and increased chance of failure. The isolation property is interpreted as strictly precluding the sharing of modified resources and operations, so that performance is affected and certain operations may be performed redundantly even when they are identical. Finally, the durability property is generally interpreted as requiring a hard record of only the final state of a transaction's resources (or its equivalent), thereby sometimes requiring excessive processing at commit or rollback. All of these taken together result in less than optimal use of resources and inefficient error recovery mechanisms. The traditional techniques for preserving the ACID properties, optimizing resource usage, and recovering from errors cannot be applied effectively in many business environments involving complex transactions, especially those pertaining to global electronic commerce and business process automation.

The current invention introduces a method of transaction processing, comprised of a set of sub-methods which preserve the ACID properties without being restricted by the traditional interpretations. The concept of atomicity is refined to mean that either all effects specific to a transaction will complete or they will all fail. The concept of consistency is refined to mean that whenever a class of consistency conditions apply to two states connected by a set of operations which are otherwise atomic, isolated, and durable as defined here, that set of operations constitute an implicit transaction. The isolation property is refined to mean that no two transactions produce a conflicting or contradictory effect on any resource on which they are mutually and concurrently (that is, during the time they are processed) dependent. The durability property is refined to mean that the final state of a transaction is recoverable insofar as that state has any effect on the consistency of the history of transactions as of the time of recovery. Thus, if the recovered state differs from the final state in any way, the durability property is a guarantee that all those differences are consistent with all other recovered states and external effects of the transaction history. Finally, a logical transaction is understood as a transition from one state in a class of consistent states to a state in another class of consistent states. This is similar to, but clearly distinct from, the concept that the interleaved operations of a set of serializable, concurrent transactions produces a final result that is identical to at least one serial execution of those transactions. Just as serializability provides no guarantee as to which apparent ordering of the transactions will result, so the new understanding of a logical transaction provides no guarantee as to which consistent state in the class of achievable states will result.

The present invention asserts that these refinements of the ACID properties and of logical transactions permit a more realistic computer representation of transaction processing, especially business transaction processing. Furthermore, these refinements permit transaction processing methods that include both the traditional methods and the sub-methods described in this invention. The new set of sub-methods used, both individually and together, make it possible to manage complex transactional environments, while optimizing the use of resources in various ways. These techniques extend to distributed transactions, and to business transactions which span both multiple individual transactions as, for example, in a business process, and multiple business entities as is required in electronic commerce and business-to-business exchanges.

In particular, these sub-methods include: (1) establishing and using consistency points which minimizes the cost of recovery under certain types of error; (2) transaction relaying which permits work sharing across otherwise isolated transactions, while simultaneously minimizing the impact of failures; (3) corrective transactions which permit error recovery without unnecessarily undoing work, without so-called compensating transactions, and while enabling the tracking and correlation of errors and their correction; (4) lookahead-based resource management based on dependencies which enables optimized resource usage within and among transactions; and, (5) dependency-based concurrency optimization which enables optimized scheduling and isolation of transactions while avoiding the high cost of locking and certain other concurrency protocols wherever possible. Each of these sub-methods is capable of being used in complex transaction environments (including distributed, linked, and mixed) while avoiding the overhead associated with traditional transaction management techniques such as two-phase commit, each can be used in combination with the others, and each of these are detailed in the description of the invention below.

Two of the sub-methods introduced here, consistency points and corrective transactions, address the problem of error recovery and correction. Consistency points differ from savepoints in that they add the requirement of a consistent state, possibly automatically detected and named. Corrective transactions differ from compensating transactions in that they effectively enfold both error repair and the correction, whereas compensating transactions address only error repair. One problem with the current approaches to handling errors that occur during complex or distributed transactions is that they fail almost as often as they succeed. A second problem is that they are difficult for the human individuals who experience both the problem and the correction, because they do not meet peoples' expectations of how the real world handles problems. A third problem is that they do not offer an opportunity to record both the error and the correction applied, which makes adaptive improvements harder to derive as much of the value of the experience (how the mistake was made and how it was corrected) is discarded after the correction is completed. A fourth problem is that they are relatively inefficient. Jointly, consistency points and corrective transactions overcome these problems.

The transaction relaying sub-method provides a means for efficient, consistent management of inter-dependent transactions without violating atomicity or isolation requirements, without introducing artificial transaction contexts, and while enabling resource sharing. Current approaches for linking inter-dependent transactions (through, for example, a single distributed transaction with two-phase commit, as chained transactions, or through asynchronous messaging) do not simultaneously ensure ACID properties and efficient, manageable error recovery. One problem with current approaches is the high resource cost of ensuring consistency and atomicity (the latter becoming a somewhat artificially expanded requirement). A second problem is the high cost of error recovery, inasmuch as the approach introduces difficult to manage failure modes, most of which are incompatible with the sub-method of corrective transactions introduced here. A third problem is that the approach, in an attempt to avoid the high overhead of distributed transactions, may permit inconsistencies. A fourth problem is that they may be compatible only with flat transaction models, while required business transactions and business processes cannot be implemented using a flat transaction model. Transaction relaying overcomes these problems.

The remaining two sub-methods, lookahead-based resource management, and dependency-based concurrency optimization, each enable efficient use of resources, especially in highly concurrent environments. One problem with current approaches is that they do not make good use of information known in advance of transaction or operation execution, but depend primarily on dynamic techniques with the result that hand-coded solutions may perform more efficiently. A second problem is that they may not be compatible with the method (or the individual sub-methods) introduced here, hence an alternative approach to resource management and concurrency optimization is required to make the other new sub-methods viable. Lookahead-based resource management and dependency-based concurrency optimization address these problems.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a transaction state graph contrasting transaction processing error recovery, with and without consistency points.

FIG. 2 is a transaction state graph illustrating a corrective transaction.

FIG. 3 is a transaction state graph illustrating transaction relaying.

In FIG. 1-3, the thicker lines indicate the intended, error-free flow of work, while the thinner lines indicate corrective or ameliorative efforts once an error occurs.

FIG. 4 is an example of code reorganization and optimization using lookahead resource management.

FIG. 5 is a transaction state graph illustrating an example (one possible alternative out of many) of dependency-based concurrency control.

FIG. 6 is an overview of a component combination for the joint application of the submethods, implemented in an ATM.

DETAILED DESCRIPTION OF THE DRAWINGS

FIG. 1: At time t1 (1), a transaction is begun and the current state is effectively saved. A portion of work is done between t1 (1) and t2 (2) and another portion of work is done between t2 (2) and t3 (3). At time t4 (4) and before the transaction can reach its intended completion state (5) an error is detected. Without consistency points, the ATM initiates a rollback (7) and restores the initial state (1) at time t5, effectively losing all the work done prior to time t4 (4). The entire transaction must now be redone.

By contrast, if the transaction manager detects and saves a consistency point at time t3 (3), the ATM initiates a lesser rollback (6) and restores the saved consistency point (3) at time t5. The work done between t1 (1) and t3 (3) is preserved, and only the work done after time t3 (3) and prior to time t4 (4) is lost and must be redone.
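The contrast FIG. 1 draws can be sketched as follows. This is an illustrative model only (the Transaction class and its dict-valued state are assumptions): with a consistency point saved at t3, rollback restores that point rather than the initial state, so only the post-t3 work is lost.

```python
# Sketch of FIG. 1: rollback prefers the most recent saved consistency
# point over the initial state, preserving work done before that point.
import copy

class Transaction:
    def __init__(self, state):
        self.initial = copy.deepcopy(state)
        self.state = state
        self.consistency_point = None

    def mark_consistency_point(self):
        # The ATM detects and saves a consistent intermediate state.
        self.consistency_point = copy.deepcopy(self.state)

    def rollback(self):
        # Without a consistency point, fall back to the initial state.
        target = self.consistency_point or self.initial
        self.state = copy.deepcopy(target)
        return self.state

txn = Transaction({"step": 0})
txn.state["step"] = 1            # work between t1 and t2
txn.state["step"] = 2            # work between t2 and t3
txn.mark_consistency_point()     # consistent state saved at t3
txn.state["step"] = 3            # work after t3; an error occurs at t4
print(txn.rollback()["step"])    # -> 2  (only the post-t3 work is lost)
```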

FIG. 2: Transaction A begins at consistency point CP0 (8), transitioning state through consistency points CP1 (9) and CP2 (10); then Transaction A commits and Transaction B begins. Transaction B encounters an undesirable condition E1 (11) before it can transition to consistency point CP3 (12) and commit. The ATM determines that condition E1 (11) is associated with consistency points of category C1, and that only CP1 (9) of prior consistency points CP0, CP1, and CP2 belongs to category C1. The ATM then restores the state to consistency point CP1 (9). It further determines that reachable consistency points CP3 (12) and CP6 (13) belong to the same consistency category C2 while consistency point CP5 (14) belongs to consistency category C3. Transaction C is then executed as a corrective transaction, transitioning state from consistency point CP1 (9) to consistency point CP4 (15), and then Transaction D is executed transitioning state from consistency point CP4 (15) to consistency point CP6 (13)—an acceptable state—where it commits. A second alternative would have been to execute Transaction C as a corrective transaction, transitioning state from consistency point CP1 (9) to consistency point CP4 (15) and then execute Transaction E transitioning state from consistency point CP4 (15) to consistency point CP5 (14)—another acceptable state—where it commits.
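The correlation step in FIG. 2, mapping an error condition to a consistency-point category and restoring the most recent prior point in that category, can be sketched as a pair of lookup tables. The tables and function below are illustrative assumptions, not a prescribed data model.

```python
# Sketch of the FIG. 2 correlation: condition E1 maps to category C1, and
# the ATM restores the most recent prior consistency point in category C1.

condition_category = {"E1": "C1"}                         # condition -> category
point_category = {"CP0": "C0", "CP1": "C1", "CP2": "C2"}  # point -> category

def restore_target(condition, prior_points):
    wanted = condition_category[condition]
    # Scan prior consistency points newest-first for one in the category.
    for cp in reversed(prior_points):
        if point_category[cp] == wanted:
            return cp
    return None   # no matching point: fall back to a full rollback

print(restore_target("E1", ["CP0", "CP1", "CP2"]))   # -> CP1
```

A corrective transaction would then be selected to carry the state forward from the restored point to an acceptable consistency category, rather than merely undoing work.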

FIG. 3: Transaction A begins to use resource sets RS0 (17) and RS1 (18), which are both in a consistent state, at consistency point CP1 (19). Both transition to durable consistency point CP3 (20), at which point Transaction A notifies the ATM that it will not subsequently modify RS1. Transaction B begins with resource set RS2 (21) in a consistent state at consistency point CP2 (22) and transitions it to consistency point CP4 (23). At CP4 Transaction B notifies the ATM that it requires resource set RS1 to continue. The ATM transfers (24) both control and the state of resource set RS1 at CP3 (20) from Transaction A to Transaction B at consistency point CP4 (23). If no errors occur subsequently, Transaction A continues, modifying resource set RS0, transitioning its state from consistency point CP3 (20) to consistency point CP5 (25) and commits. Likewise, Transaction B continues in the absence of subsequent errors, modifying resource sets RS1 and RS0, transitioning from consistency point CP4 (23) to consistency point CP6 (26) and commits.

If an undesirable condition E1 (27) occurs in Transaction A subsequent to consistency point CP3 (20) and prior to commit, and after Transaction B has committed or is in-flight, the ATM simply restores (28) resource set RS0 to consistency point CP1 (19). If Transaction B has aborted, the ATM also restores resource set RS1 (18) to consistency point CP1 (19). (It is also possible to restore to consistency point CP3 (20) and re-run the work that affects only RS0, although this is not shown in the diagram.)

If an undesirable condition E2 (30) occurs in Transaction B subsequent to consistency point CP4 (23) and prior to commit, and Transaction A has committed or is in-flight, the ATM restores (31) resource set RS2 to consistency point CP2 (22) and restores (32) resource set RS1 to consistency point CP4 (23). If Transaction A is in-flight, the ATM also transfers (33) control of resource set RS1 (80) to the Transaction A context (18). If Transaction A has aborted, it further restores resource set RS1 (18) to consistency point CP1 (19). (Again, it is also possible to restore to consistency point CP4 (23) and re-run the work that affects both resource sets RS1 (80) and RS2 (21), without handing control over RS1 (80) back to the Transaction A context, although this is not shown in the diagram.)

FIG. 4: The ATM analyzes and rewrites Transaction D from the Initial Definition (on the left hand side) to the re-structured Enhanced Definition (on the right hand side). Directives are inserted regarding favoring (34) (35) and (36), to assert consistency points (37)(38), and to deallocate resources (39)(40)(41). The “Read Z” step is performed earlier (42), thereby optimizing efficiency. The “Write Y=Y+ΔX” step is also performed earlier (43), thereby enabling both interim assertion of consistency points (37)(38) and the early deallocation, after its last use in the transaction, of each resource (39)(40)(41).
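One of the FIG. 4 optimizations, deallocating each resource immediately after its last use rather than holding everything until transaction end, can be sketched with a single lookahead pass. The list-of-tuples step encoding and the resource names are assumptions for this illustration.

```python
# Sketch of lookahead-based early deallocation: compute each resource's
# last use, then insert a "dealloc" directive immediately after it.

def add_deallocations(steps):
    # Lookahead pass: record the last index at which each resource appears.
    last_use = {res: i for i, (_, res) in enumerate(steps)}
    enhanced = []
    for i, step in enumerate(steps):
        enhanced.append(step)
        for res, idx in last_use.items():
            if idx == i:
                enhanced.append(("dealloc", res))   # release after last use
    return enhanced

txn = [("read", "X"), ("read", "Z"), ("write", "Y"), ("write", "X")]
for step in add_deallocations(txn):
    print(step)
# "Z" and "Y" are released as soon as their last use completes,
# rather than being held until the end of the transaction.
```

This requires knowing the whole transaction definition in advance, which is exactly the information a cumulative, step-at-a-time processor cannot exploit.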

FIG. 5: This shows the scheduling of four concurrent transactions E, F, G, and H. The ATM determines from dependency information that Transaction E consists of consistency groups CG1 (44), CG2 (45), CG3 (46), and CG4 (47), that Transaction F consists of consistency groups CG5 (48) and CG6 (49), that Transaction G consists of consistency groups CG7 (50), CG8 (51), and CG9 (52), and that Transaction H consists of a single consistency group CG10 (53). It further determines that CG6 (49) shares a dependency with consistency groups CG1 (44), CG3 (46), and CG4 (47), CG9 (52) shares a dependency with consistency group CG1 (44), and that there are no other dependencies among the transactions. Transaction H is not in the same conflict class as E, F, or G. Given this information, the ATM begins Transactions E, F, and H at time t0 (54), scheduling consistency groups CG1 (44), CG5 (48), and CG10 (53) for immediate and concurrent execution. At time t1 (55) after consistency group CG1 (44) completes, it schedules consistency groups CG2 (45), CG3 (46), CG4 (47), and CG7 (50) to run concurrently. At time t2 (56) after consistency groups CG2 (45), CG3 (46), and CG4 (47) have completed, Transaction E commits. After consistency group CG7 (50) of Transaction G completes at time t3 (57), consistency group CG8 (51) is scheduled to run. Also at time t3 (57) after Transaction E has committed, consistency group CG6 (49) of Transaction F is scheduled to run; and then at time t4 (58) the ATM schedules consistency group CG9 (52) to run. (If Transaction E has already committed, the ATM can schedule consistency groups CG8 (51) and CG9 (52) of Transaction G to run concurrently, although this is not shown in the diagram.) Because Transaction H cannot possibly be in conflict with Transactions E, F, and G, it is permitted to run to completion without further scheduling and without isolation otherwise enforced. At some time t5 (59) all the transactions will have completed and committed.
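The scheduling pattern of FIG. 5 can be sketched as running consistency groups in concurrent "waves": at each point, every group whose dependencies are all satisfied is released to run. The group names and dependency table below loosely follow the figure but are illustrative assumptions, not the figure's full schedule.

```python
# Sketch of dependency-based scheduling: groups with no unmet dependencies
# run concurrently; a group waits only for the groups it depends on.

def schedule(groups, depends_on):
    done, waves = set(), []
    while len(done) < len(groups):
        # All groups whose dependencies are satisfied can run concurrently.
        ready = sorted(g for g in groups
                       if g not in done
                       and all(d in done for d in depends_on.get(g, ())))
        if not ready:
            raise RuntimeError("cyclic dependency among consistency groups")
        waves.append(ready)
        done.update(ready)
    return waves

groups = ["CG1", "CG2", "CG5", "CG6", "CG9"]
depends_on = {"CG2": ["CG1"], "CG6": ["CG1"], "CG9": ["CG1"]}
for wave in schedule(groups, depends_on):
    print(wave)
# -> ['CG1', 'CG5'] then ['CG2', 'CG6', 'CG9']
```

Note that isolation is enforced only where a dependency exists; a group like CG5, with no shared dependencies, is never delayed, which is the figure's point about Transaction H running unimpeded.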

FIG. 6: The ATM, in the preferred embodiment, contains all of the subunits referenced in this diagram. Due to the complexity of potential interconnectivity, which may be dynamically rearranged, it is infeasible to display all possible interconnections and hierarchies.

The Parser (60) has responsibility for interpreting or compiling transaction definitions, which it may receive from an external source or by reference to a transaction definition stored in the Repository (71) via the Repository Manager (61). The Parser may forward interpreted or compiled transaction definitions to the Repository Manager (61) for deferred execution or to the Execution Manager (62) for immediate execution. The Execution Manager (62) processes transactions, allocating and deallocating transaction contexts, passing directives and instructions to the appropriate ATM components, and orchestrating transaction scheduling, commit, rollback, and rollforward. The Consistency Manager (63) has responsibility for automatic identification of consistency points and verification of asserted consistency points. The Correction Processor (64) has responsibility for correlating abnormal conditions and consistency points, either by direct association, or through condition categories or consistency classes. Based on the transaction definition and possibly a business process definition, it may use various techniques to discover, optimally select, or create a corrective transaction and submit it to the Execution Manager (62). The Dependency Manager (65) has responsibility for interpreting dependency directives, detecting dependencies, identifying consistency groups based on dependencies, and asserting the corresponding consistency points. The Restructuring Processor (66) has responsibility for altering the order of transaction steps based on information from the Repository (71), the Consistency Manager (63), and the Dependency Manager (65). The Restructuring Processor (66) is also responsible for including internally derived resource management and consistency directives in the transaction definition. The Resource Manager (67) is responsible for accessing and updating resources, allocation management, scheduling, resource isolation, maintaining cache, and enforcing other resource constraints. 
The Resource Manager (67) is also responsible for detecting resource requirements, implementing resource management directives, and providing resource management directives to the Restructuring Processor (66). The Repository Manager (61) is responsible for coordinating all stored information, including dependencies, transaction definitions, associations, condition classes, consistency categories, subscriptions, and so on. The Publication/Subscription Manager (68) is responsible for processing publication and subscription definitions, detecting publication events, and notifying appropriate subscribers of publication events. The Recovery Manager (70) is responsible for evaluating, selecting, and directing recovery options, passing control to the Correction Processor (64) if a corrective transaction is selected. The Isolation Manager (69) interacts with the Resource Manager (67) and, more intensively, the Resource Scheduler (72) to ensure that the Isolation Property is correctly maintained for every resource and transaction, sending constraints and dependency information as needed to the Publication/Subscription Manager (68) and the Dependency Manager (65).

DETAILED DESCRIPTION OF THE INVENTION

Businesses work in an imperfect world, and attempt to impose their own order on events. Constantly in a state of flux, they persist in imposing ‘acceptable’ states through the efforts of all their employees, from the accountants running yearly, quarterly, weekly, or even daily accounts, to the zealous (or indifferent) stock clerks managing inventory.

When an error occurs, it is recognized because the result differs from what is expected. Results can differ from expectations in several ways, including computational results, resources consumed, catastrophic failures to complete the work, excessive time to complete the work, and so on. Typically, the business does not know either the explicit cause of an error or its full impact. For example, it may not know whether data was corrupted (wrong account number), the procedure was mistakenly performed (9*6=42), or the wrong procedure was used (multiplied instead of divided). Obviously, errors (including those of timeliness and resource overuse) must be prevented to the degree possible. Any undesirable effects of errors must be repaired and the desired effects asserted (correction, traditionally by resubmitting the corrected transaction). Furthermore, finding out which error occurred, and enabling those errors to be tracked over time, becomes more valuable than merely repairing and correcting each as it occurs. In this way the business can discover where it needs to focus attention on improving the overall process and improving its efficiency.

Overview of the Invention

The present invention is a method, consisting of a coordinated set of sub-methods, which enables efficient transaction processing and error management. By contrast with prior approaches, it is extensible to complex transactions and distributed business environments, and is particularly well-suited to business process management. The sub-methods are consistency points, corrective transactions, transaction relaying, lookahead-based resource management, and dependency-based concurrency optimization.

In the preferred embodiment of the present invention, a system implementing this invention (1) continually transitions between automatically-detected stable (i.e. logically correct and permissibly durable) acceptable states (each is also known as a ‘consistency point’), ensuring rapid and minimal recovery efforts for any error; (2) automatically enables inter-linked, possibly distributed, transactions to share intermediate work results at ‘consistency points’ through transaction relaying, moving from one acceptable state to the next; (3) efficiently manages I/O and storage use by identifying for each transaction (or procedure), in advance of execution, a set of data, resources, and operations depended upon by that transaction to move from one consistency point to its succeeding consistency point; (4) schedules the use of those resources in such a manner as to improve efficiency and concurrency while permitting dynamic scheduling of unplanned transactions; and (5) automatically implements repair and corrective efforts whenever a mistake is identified.

In an extension of the preferred embodiment, the system shares resources and data that are touched or handled by multiple subordinate parts of a complex or distributed transaction, rather than duplicating the same and letting each part have its own copy, or rather than locking all other parts out while each particular part operates with that same data and/or resources. This ‘overlap’ in effect becomes a window into the entire business' processes, a window that moves as transactions, or parts thereof, successfully and correctly complete—or when an error occurs, the effects are repaired, and failed work corrected. Moreover, all that needs to be maintained during the process of a particular sub-part of the transaction is the ‘delta save’, that is, the changes since the known consistency point which the chain last reached.
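The 'delta save' idea above can be sketched minimally: only the changes made since the last consistency point are retained, so rollback discards a small delta rather than the whole transaction's work. The class shape and method names below are illustrative assumptions, not the patent's implementation.

```python
# Sketch: maintain only the delta since the last known consistency point.

class DeltaSave:
    def __init__(self, state):
        self._state = dict(state)   # state as of the last consistency point
        self._delta = {}            # changes made since that point

    def write(self, key, value):
        self._delta[key] = value

    def read(self, key):
        # reads see the delta first, then the consistent base state
        return self._delta.get(key, self._state.get(key))

    def reach_consistency_point(self):
        # fold the delta into the base state and open a new window
        self._state.update(self._delta)
        self._delta.clear()

    def rollback_to_consistency_point(self):
        # recovery is cheap: discard only the delta
        self._delta.clear()
```

For example, a write that has not yet been folded into a consistency point can be undone simply by clearing the delta.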

In yet a further extension, a system engages in transaction management, by implementing transaction lookahead, or managing transaction dependencies, or any combination thereof.

Each of the sub-methods is further detailed and explicated below.

1. Consistency Points

Through the course of a transaction, it may happen that the set of resources enters a consistent state from time to time. Such a consistent state is referred to as a consistency point and may be detected automatically by the transaction manager or some other software subsystem, or may be manually asserted by the user (possibly via program code or interactive commands). Numerous methods for automatic detection of consistency exist in the literature and are well-known. Consistency points may be durable or non-durable; durability determines the circumstances under which they may be used. In effect, a consistency point is a savepoint with the added requirement of consistency and the optional property of durability. When the system detects a potentially recoverable error, it can rollback to the consistency point by restoring the state as of the consistency point (exactly as it might rollback to a synchronization point or to an asserted savepoint). It may then optionally and automatically redo the work that was subsequently done (by, for example, reading the log or log buffers) in the hope that the error will not recur. This might be the case when, for example, (1) a deadlock is encountered (in which case the consistency point need not be durable) or (2) power fails (in which case the consistency point must be durable). Numerous methods exist for recovery to a synchronization point or savepoint, and are well-known. Rollback to a consistency point will, in general, be more efficient than rollback to the beginning of a transaction in a system which does not support consistency points.
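The "savepoint plus consistency check" behavior described above can be sketched as follows. The predicate-based detection, the class shape, and the redo log are illustrative assumptions rather than the patent's implementation.

```python
# Sketch: a consistency point is detected when all consistency
# conditions (predicates over the state) hold; rollback restores the
# most recent such point, optionally redoing subsequent work.

class Transaction:
    def __init__(self, state, conditions):
        self.state = dict(state)
        self.conditions = conditions      # predicates over the state
        self.cp_state = dict(state)       # last detected consistency point
        self.redo_log = []                # steps applied since that point

    def apply(self, step):
        step(self.state)
        self.redo_log.append(step)
        if all(cond(self.state) for cond in self.conditions):
            self.cp_state = dict(self.state)   # consistency point detected
            self.redo_log.clear()

    def rollback_to_cp(self, redo=False):
        self.state = dict(self.cp_state)
        log, self.redo_log = self.redo_log, []
        if redo:                              # optionally retry the work
            for step in log:
                self.apply(step)
```

For instance, with the condition "debits equal credits", a debit alone does not create a consistency point, but the matching credit does, and a later failed step can rollback only to that point rather than to the start of the transaction.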

These examples illustrate some of the value of consistency points:

    • automatic deadlock recovery—When a deadlock is detected, the usual response is to return control to the user (or program) with an error message or to select one of the participating transactions and abort it. With consistency points, the system can implement an internal retry loop which makes it very likely that the deadlock condition will not recur (for a variety of reasons). Such an internal retry loop is much more efficient than one implemented by the user (the usual approach to deadlock recovery). It is clearly more efficient than having the system automatically break deadlocks by picking a “victim” from among the transactions involved and forcing it to fail, and more reliable than expecting the correct response to have been encoded into a program by a programmer.
    • automated savepoints—Savepoints are established by manual declaration of the user, either interactively or through a program, and as an added step in a transaction. By contrast, consistency points can be established by automatic detection that some particular set of one or more pre-defined consistency conditions have been met. This enables both automatic and manual rollback to the most recent consistency point.
    • categories of consistency points—Users (including business users, system designers and administrators) can define multiple sets of consistency conditions so that multiple, different categories of states, each consistent with respect to a particular set of consistency conditions, can be detected and named. Detection can be automatic and naming can be according to a pre-defined naming convention. A consistency point of category C1 is more general than a consistency point of category C2 if every consistency point of category C2 also belongs to category C1. Other rules of set theory apply and can be used to simplify testing for consistency points of one or more categories using methods well-known to one familiar with the art.
    • categorized rollback—By establishing a relationship between a type or class of error (based, for example, on error code) or other detectable condition, and a category of consistency point (possibly based on name), the system can then rollback a transaction to an associated category of consistency point when that error is detected. If the associated category of consistency point has not been detected or asserted, traditional error handling techniques can be used. Because both the relationship between error type and category of consistency point, and the consistency conditions to be detected, can be changed, the behavior of the system can be easily maintained. In one embodiment, this can be done without the necessity of modifying transaction processing programs since the relationship and the consistency conditions can be held in a database (for example) and determined at program execution time.
    • commit processing—When a transaction commits, the standard approach is to make the final state of all affected resources durable. If a transaction contains one or more durable consistency points, the state of resources that have not been modified since a consistency point involving those resources need not be made durable during commit processing. This, in effect, permits commit processing to be spread out over time and possibly using parallel processing, thereby eliminating hotspots and speeding commit processing.
    • power failure recovery—When power fails, the usual response is to enter system recovery processing once it has been restored. The canonical approach to system restart of transaction management systems is equivalent to first initiating rollback of each transaction uncommitted at the time of power failure, and then to initiate rollforward. If the rollback phase for uncommitted transactions is to the most recent consistency point, followed by notification to the user as to “where they were” according to system records, the amount of work that the system must do in order to restart, and which the user must then redo, is substantially decreased. A similar approach can be used for recovery from certain other types of failure, such as storage media failures, and incorporating other standard recovery mechanisms as appropriate.
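The internal retry loop described under automatic deadlock recovery can be sketched as follows. The `Deadlock` exception, the retry bound, and the two callables are illustrative assumptions; a real transaction manager would wire these to its lock manager and recovery machinery.

```python
# Sketch: on deadlock, roll back to the most recent consistency point
# and retry internally, instead of aborting the whole transaction or
# returning the error to the user.

class Deadlock(Exception):
    pass

def run_with_cp_retry(do_work, rollback_to_cp, max_retries=3):
    """Retry `do_work` after each deadlock, restoring the consistency
    point first; re-raise if the deadlock persists."""
    for attempt in range(max_retries + 1):
        try:
            return do_work()
        except Deadlock:
            if attempt == max_retries:
                raise
            rollback_to_cp()   # restore last consistent state, then retry
```

Because only the work since the last consistency point is redone, each retry is cheap relative to restarting the transaction.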

Unlike the prior art, the present invention's treatment of consistency is far more thorough, logical, and powerful. Most present-day DBMS products (e.g., IBM's DB2 or Oracle's Oracle 9i) implement only an extremely limited concept of consistency enforcement, generally known as integrity rule or constraint enforcement. However, while these products may verify that the changes made by a transaction are consistent with some subset of the known integrity rules at various times (e.g., after each row is modified, after a specific transaction step is processed, or before transaction commit), no product currently on the market establishes and uses internally valid and logically consistent “checkpoints” (i.e. consistency points) to which the transaction can recover (perhaps automatically). Nor do they permit the user to request the establishment of consistency points, to assert consistency points (except implicitly and often erroneously at the end of a transaction), or to separate consistency points from synchronization points (as, for example, between volatile memory and durable storage). Other advantages and uses of consistency points are further detailed below as they interact with other elements of this invention.

By extension, the method of consistency points can be applied to pseudo-transactions, physical transactions, logical transactions, and business transactions.

2. Transaction Relaying

Transaction relaying refers to the method of moving the responsibility for resource isolation and consistency in a window from transaction to transaction, much like the baton in a relay race, and permitting sharing of that responsibility under certain conditions (explained below). By further analogy, and for the purpose of explaining transaction relaying in its most simplified form, two transactions A and B become like runners in a relay race (football game). The baton (football) is a resource that A must pass to B without dropping (corruption). A conflicting transaction C is like a member of the competing team that would like to acquire control of the baton (football) from A and B. By passing the baton without either runner slowing down (permitting B to gain access to the resource held by A prior to commit), there is no opportunity for the competing team to acquire control (for conflicting transaction C to gain control of, let alone alter, the resource). Furthermore, the entire process is much more efficient than if the runners were to stop in order to make the transfer.

Consider a transaction B having either a semantic or resource dependency (or both) on transaction A. For example, suppose that a particular business process consists of transactions A and B, and that there is an integrity rule or constraint, or a dependency, that requires that transaction B always follow A because it relies upon the work done by A. In other words, some portion of the final state of resources affected by A (the output of A) is used as the initial state of resources required by B (the input of B). By the definitions of transaction and consistency point, the final state of A is a consistency point, even before A commits. Under the usual approaches we must either (1) accept the possibility that the final state of A is altered by some transaction C before B can access and lock the required resources (the sequential transaction scenario), (2) accept the possibility that the state of resources needed by B is different from the state of those same resources as perceived by some other transaction (chained transactions), or (3) run transactions A and B combined in a distributed transaction, accepting the fact that all resources touched by either A or B will be locked until B completes (the distributed transaction scenario).

Transaction relaying recognizes the fact that A and B may share the state of the resources that B requires at least as soon as A enters the final consistency point for those stated resources and has made that final state durable (assuming durability is required). Unlike chained transactions, it need not wait until A is ready to commit. It need not even wait until locks are released. Rather, the transaction manager, lock manager, or some other piece of relevant software either transfers ownership of those locks directly to B or establishes shared ownership with B (as long as only one transaction has ownership of exclusive locks on a resource at any given time if the ACID properties are desired), and never releases them for possible acquisition by C. Unlike the sequential transaction scenario, there is no possibility that C will interfere in the execution of B. Unlike the chained transaction scenario, transaction relaying does not require transaction A to have committed, the beginning of transaction B to be immediately after the commit of transaction A, the commit of A and begin of B to be atomically combined in a special operation (indeed, B may already have performed work on other resources), transactions A and B to be strictly sequential, or transaction B to be the only transaction that subsumes shared responsibility for resources previously operated on by transaction A. Unlike the distributed transaction scenario, resources held by A, but upon which the initial state of B does not depend, are released as soon as A completes and there is no two-phase commit overhead. Unlike split transactions, transaction relaying does not introduce artificial transaction contexts, can be fully automated without sacrificing consistency, and yet enables collaborative transaction processing in which work groups can communicate about the status and intermediate results of their work (including negative results).

An extension of the method is to permit transaction B to have done additional work on other resources prior to the consistency point discussed above. Another extension of the method is to permit A to do work on other resources after the consistency point discussed above. A further extension of the method is to permit transaction A to do work after the consistency point discussed above, so long as no consistent state on which transaction B depends is ultimately altered by transaction A.

Yet another extension of the method is to permit transactions other than transaction B to have a similar relationship to transaction A, involving possibly different resources and possibly different consistency points. The method preserves the ACID properties of all transactions as long as no more than one transaction in effect has responsibility for modification of a shared resource at any particular time, and that transaction can rollback the state of those resources to the most recent durable consistency point in which they are involved. If durability is not a recovery requirement (as, for example, during deadlock recovery), then the consistency point need not be durable.

By extension, under transaction relaying, if the initial state of a resource as needed by one or more transactions including B happens to be an intermediate state of that resource produced by A, it may be made available to those transactions long before A commits if the following conditions are true (other conditions may enable this as well): (1) at most one transaction of those sharing responsibility for recoverability, isolation, and consistency of the resource modifies those resources subsequently, (2) the intermediate state is a consistency point, and (3) the intermediate state is recoverable (though not necessarily durable). These conditions are intended to guarantee that the result of A and B with transaction relaying around a consistency point is equivalent to some serializable interleaving of transactions D, E, F, and G, where D is the work that A does before the consistency point, E is the work A does afterward, F is the work B does before the consistency point, and G is the work B does after the consistency point. Other sets of conditions or rules that would produce this result are possible.

Moreover, the intermediate state produced by A could just as easily have been produced by B (or other specific transactions) had the instructions to do so been inserted in B (or those other transactions) at some point prior to that at which the intermediate state of A is accessed by B. Transaction restructuring such as this under transaction relaying may be used to improve processing efficiency and performance. By further extension, under transaction relaying a group of transactions can share multiple intermediate states. This may become important when scheduling subordinate parts of a complex transaction for the most efficient processing; transaction relaying allows a transaction management facility to balance work amongst ‘subordinate’ transactions by including instructions such as those described in all subordinate transactions (or at least establishing the means for such inclusion when needed) and then selecting which of those subordinate transactions actually perform the work so as to promote efficiency, either in advance of execution or dynamically during execution.

In transaction relaying, both A and B share control over isolation of shared resources. For example, they would share ownership of the locks on the shared resources if locking were used to control isolation. Optimally, and in order to preserve the consistency and isolation properties, both A and B must have completed before transactions other than A and B perceive locks on those resources to have been released. If B completes before A, B relinquishes its lock ownership and A retains lock ownership until A completes. If A completes before B, A relinquishes its lock ownership and B retains lock ownership until B completes. In this way, both A and B (all owners of the shared resource) must release locks on shared resources in a manner consistent with the type of lock held (e.g., share versus exclusive locks) and the concurrency control mechanism before other transactions can access the resource. If A and B complete simultaneously, or whenever A and B have both completed, lock ownership reverts to the resource manager and so locks are effectively released. In order to preserve serializability, the two-phase locking protocol applies to the shared resource as if a single transaction were involved. The usual rules of lock promotion or demotion apply. Insofar as external transactions (that is, transactions not involved in sharing the resources in question via transaction relaying) are concerned, a resource shared by A and B is locked in the manner which is most exclusive of the types of access requested by A and B. Similar rules may apply to lock scope escalation (e.g., row to page) and to transaction relaying involving more than two transactions.
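The lock-ownership rules above can be sketched minimally: ownership is relayed or shared without ever releasing the lock in between, and external transactions see the lock held until the last sharing owner completes. The class and method names are illustrative assumptions, not the patent's lock manager.

```python
# Sketch: shared lock ownership under transaction relaying.

class RelayLock:
    def __init__(self, resource):
        self.resource = resource
        self.owners = set()

    def acquire(self, txn):
        if self.owners:
            raise RuntimeError("resource held by a relaying group")
        self.owners = {txn}

    def share(self, from_txn, to_txn):
        # relay the 'baton': add the successor WITHOUT releasing the
        # lock, so no conflicting transaction C can slip in between
        assert from_txn in self.owners
        self.owners.add(to_txn)

    def complete(self, txn):
        # the lock is released only when the last sharing owner
        # completes; returns True when it becomes externally visible
        self.owners.discard(txn)
        return not self.owners
```

Whichever of A or B completes first simply relinquishes its share; the lock remains held by the survivor, exactly as described above.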

By obvious extension, transaction relaying can be used in systems that employ non-standard concurrency control schemes and enforce isolation through mechanisms other than locking; appropriate adjustment to the specific mechanism that enforces isolation is then required to permit the sharing of resources at consistency points.

By extension, transaction relaying enables a transaction management facility (or other appropriate software systems) to remove redundant operations performed by a group of transactions and assign those operations to a specific transaction or transactions, thereby improving the overall efficiency of the system. Such a facility can determine which operations among a group of transactions are redundant through automatic means well-known to those familiar with the art (for example, pattern matching), can be informed of those redundant operations by some other agent such as a human individual knowledgeable about the intent of the transactions in the group, or can use some combination of the two.

Transaction relaying can be extended to arbitrarily complex collections of concurrent and interdependent transactions, even if those transactions were running under distinct transaction managers in a distributed computing environment. In such cases, the means for isolation enforcement will typically be distributed, but two-phase commit processing is not required across those transactions involved in transaction relaying (although it need not be precluded). Numerous mechanisms for distributed isolation enforcement exist and will be well known to one familiar with the art. Indeed, once the method of transaction relaying has been explained as it applies to two transactions (“A” and “B”), extensions to arbitrarily complex collections of concurrent and interdependent transactions, including those spread across a distributed computing environment however geographically dispersed or however many business entities may be involved, will be obvious to one trained, competent, and versed in the art.

By extension, this method of the present invention can be implemented so that transactions publish their states and/or consistency conditions at consistency points and permit other transactions to subscribe to the state of associated resources. A variety of methods may be used to determine which of the subscribing transactions will gain write permission over the associated resources and in what order. By further extension, the group of subscribing transactions can be treated to various methods of concurrency optimization, including the method of dependency based concurrency optimization described below. By extension, the method of transaction relaying can be applied to pseudo-transactions, physical transactions, logical transactions, and business transactions.
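The publish/subscribe behavior at consistency points can be sketched as a small broker. The broker API below is an assumption for illustration only; it is not the Publication/Subscription Manager (68) of the preferred embodiment.

```python
# Sketch: a transaction publishes its state when it asserts a
# consistency point; subscribers interested in the resource are notified.

class ConsistencyPointBroker:
    def __init__(self):
        self.subscribers = {}   # resource -> list of callbacks

    def subscribe(self, resource, callback):
        self.subscribers.setdefault(resource, []).append(callback)

    def publish(self, resource, state):
        # invoked by a transaction at one of its consistency points
        for cb in self.subscribers.get(resource, []):
            cb(state)
```

A subscribing transaction would then decide, by whatever policy is in force, whether to request write permission over the published resource.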

In another extension of the present invention, a locking flag is used to denote the dependency upon each particular resource (including data elements), and to transfer control over and responsibility for such to the transaction which has yet to attain a consistent state with the same, thereby allowing intermediate, partial, or distributed transactions to proceed and reach completion or acceptable states without necessitating the entirety of a complex or distributed transaction to successfully conclude.

3. Corrective Transactions

Corrective transactions provide an alternative to both compensation and rollback in circumstances in which the desired result of a transaction can be understood as producing a state that meets a particular set of consistency conditions. For example, an ATM transfer transaction may have as its key consistency conditions the crediting of a specific account by a specific amount of money, and maintaining a balance of debits and credits across a set of accounts (including the specified one).

In the event that an error occurs during transaction processing, a corrective transaction appropriate to the error is invoked. Rather than restoring the initial state of a set of resources as would either a rollback or a compensating transaction, a corrective transaction transforms or transitions the state of the affected set of resources to a final state which satisfies an alternative set of consistency conditions (integrity constraints and transition constraints). The alternative set of consistency conditions constrain the final state to one of possibly many acceptable states and may be, for example, completely distinct from the initial set or may be a more general category of consistency conditions. For example, consider a simple business process consisting of two predefined but parameterized transactions, a funds-transfer transaction (parameterized for transfer amount and two account numbers) and a loan transaction (parameterized for loan amount but with fixed account number). If an attempt to transfer a specified amount between two accounts fails because of insufficient funds, an automatic corrective transaction might loan the user the required funds, thereby expanding the consistency conditions to include an account not owned by the user with respect to balancing credits and debits. In this example, the corrective transaction might be manually predefined by the bank and caused to run as part of an error handling routine. Similarly, rather than debiting the explicitly specified account (for example, checking), it might debit an alternate account (for example, savings or an investment account).
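The funds-transfer example above can be sketched as follows. The account structure, the loan-account name, and the loan rule are illustrative assumptions; the point is that on failure the corrective transaction reaches an alternative acceptable state (debits still balance credits across an expanded set of accounts) rather than rolling back.

```python
# Sketch: corrective transaction for an insufficient-funds failure.

def transfer(accounts, src, dst, amount, loan_account="BANK_LOANS"):
    if accounts[src] >= amount:
        accounts[src] -= amount
        accounts[dst] += amount
        return "transferred"
    # Corrective transaction: loan the shortfall into the source
    # account, then complete the transfer. Debits and credits still
    # balance, now across a set that includes the loan account.
    shortfall = amount - accounts[src]
    accounts[loan_account] = accounts.get(loan_account, 0) - shortfall
    accounts[src] += shortfall
    accounts[src] -= amount
    accounts[dst] += amount
    return "transferred-with-loan"
```

Note that the sum over all accounts (including the loan account) is invariant in both the normal and the corrective path, which is precisely the expanded consistency condition.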

This method of the present invention replaces the usual fixed set of consistency conditions with a category of such sets and invokes an auxiliary set of actions (the corrective transaction) that will transform the current state into one that satisfies some set of consistency conditions belonging to that category. That is, the traditional concept of the consistency property for transactions is refined such that the options for achieving a consistent state in the completion of a transaction are broadened. For each set of consistency conditions defining the end state of a transaction, each of the other sets of consistency conditions belonging to its category constitute an acceptable set of consistency conditions. This concept of acceptable sets of consistency conditions mimics the real world of business, in which errors are common and a strictly pre-determined result of work is not possible. Rather, those who perform work in a business context strive to achieve some acceptable result, where acceptability is determined by satisfaction of a number of alternative sets of constraining conditions and is often associated with business risk and opportunity assessment.

This method is particularly valuable when a set of linked interdependent transactions is involved and a flat transaction model does not apply. For example, a classic problem of this nature involves the scheduling and booking of a travel itinerary. It is not uncommon that the ideal routing, carrier, and timing are unavailable for every segment of a multi-segment itinerary, but that some compromise alternative is available. Each segment is often reserved and booked via a separate transaction, and cancellation penalties after more than a few minutes may preclude arbitrary rescheduling. Possible compromises constitute alternative consistency conditions, possibly ranked by the traveler's preference. If a transaction to book a particular segment of the itinerary fails, a corrective transaction can book an alternative for that segment. For example, it might involve booking a flight to an airport near the original segment destination and a rental car with the attendant compromise of less time between flights. Similarly, a corrective transaction might cancel a certain number of already scheduled segments in order to assert a more viable alternative schedule. The segments to be cancelled might be selected, for example, based on minimizing any negative financial impact on the overall cost of the itinerary.

Business processes do not always lend themselves to such simple models as those assumed by existing approaches to transaction processing: often they involve interleaved multi-hierarchies and networks. The processes a business uses to correct for errors do not always return the business to a prior state as is assumed in other approaches to transaction error handling (it would be too costly to do so). Rather, the business is transitioned to some acceptable state and the nature of this state is made available to those portions of the business that have some dependence upon it. Notice the repeated reference to “some acceptable state” instead of the more familiar technical notion of a specific internally consistent transaction end state. Obviously, businesses do not follow a rigid set of rules of consistency as a database might. However, it should be equally obvious that some action will be taken if the business is not in an acceptable state. Rather than ignoring this reality, depending entirely on manual corrections (difficult if not impossible at today's transaction volumes), or insisting that the map must be the territory, the present invention actively attacks the problem by defining consistent and acceptable states to which the business process will move when it becomes flawed, states from which it may resume normal transaction management once again.

In a business process, the various constituent and linked transactions (including pseudo-transactions) often create a complex network of steps with many decision branches and concurrent sub-processes. Many portions of the process are designed to handle exception or error conditions. If a transaction fails, then rollback and redo, or rollback of a transaction that includes a decision branch, may not be a reasonable option. In particular, such a recovery mechanism will often consume so much time or other resources that the business process is no longer viable. The method of corrective transactions requires that one identify a state that would have been reachable had a different portion of the process been activated (that is, had a different branch been taken), and that satisfies an acceptable set of consistency conditions. Each such state is designated as an alternative end state. The failed transaction is then rolled back to the most recent state for which a transaction or set of linked transactions (the corrective transaction) exists that will transition from the consistency point to an alternative end state. This point may be the current error state (and possibly inconsistent), or it may be the most recent consistency point. The corrective transaction is then run.

The method of corrective transactions requires that each business, logical, or physical transaction submitted to the system, and which is to be subject to the benefits of the method, be identified according to the consistency conditions that will be enforced on the set of resources affected by that transaction, or that such consistency conditions be automatically discoverable by the system. Such consistency conditions might, for example, be stored conveniently in an online repository so as to be accessible to the transaction manager, other appropriate software, or a human individual. Whenever an error occurs that results in the failure of the transaction (thereby failing to establish a state among the preferred final states), the failed transaction is returned to a recoverable consistency point (the most recent one in the preferred embodiment). The error is classified (in the preferred embodiment according to the nature of the most recent consistency point) and the corresponding set of consistency conditions on the affected resources is established. A transaction (the corrective transaction) is then invoked which will transform the affected resources from the state of the most recent consistency point to a state that most closely approximates the intended state and satisfies the new consistency conditions (we refer to these as “acceptable conditions”), assuming that such a transaction exists. In the event that no such corrective transaction exists, the failed transaction is then returned to an even earlier consistency point, and an appropriate corrective transaction invoked. The process is repeated until an acceptable set of consistency conditions is reached. By extension, this iterative process might be replaced by other techniques which achieve an equivalent result, examples of which are described below.
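The iterative walk back through consistency points can be sketched as follows. This is an illustrative Python fragment only, not part of the claimed method; the consistency-point names and the `corrective_for` lookup (standing in for the online repository query described above) are hypothetical.

```python
def recover(history, corrective_for):
    """Walk back through the recorded consistency points, most recent
    first, until one admits a corrective transaction; then run it.
    `corrective_for` maps a consistency point to a corrective transaction
    (a callable) or None, e.g. via a repository lookup."""
    for point in reversed(history):
        corrective = corrective_for(point)
        if corrective is not None:
            return corrective(point)  # transition to an acceptable state
    raise RuntimeError("no consistency point admits a corrective transaction")

# Only the earlier point "order_placed" admits a corrective transaction,
# so the method falls back past "payment_pending" to it.
correctives = {"order_placed": lambda p: "corrected from " + p}
print(recover(["order_placed", "payment_pending"], correctives.get))
# corrected from order_placed
```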

In one embodiment, the establishment of a target set of acceptable conditions is determined automatically, for example by means as diverse as rule-based inference based on error class, the use of a theorem prover to determine conditions which will permit the transaction to complete, or a catalog lookup. In another embodiment, the establishment of acceptable conditions (or equivalently a transaction that will produce those conditions) is determined by an interaction with a suitably authorized person. One familiar with the art could easily specify numerous other means to determine the acceptable conditions based on a combination of class of error, recoverable consistency points within the failed transaction, and consistent states accessible by executing one or more transactions.

In one embodiment, the determination of the steps in the corrective transaction (that is, its definition) is fixed in advance and there is one such transaction for each class of error. In another embodiment, the steps which constitute the corrective transaction (which themselves might be either implicit or explicit transactions) are determined automatically using, for example, a theorem-prover which reasons from the consistency point (initial state as axioms) to a final state which meets the acceptable conditions, the steps of the proof being the steps in the corrective transaction. In an alternative embodiment, back chaining is used to start from an arbitrary potential state that meets the acceptable conditions (defined, for example, as part of an overall business process schema), incorporating steps from a pool of pre-defined steps, operations, or transactions until the state given as the consistency point is reached. The incorporated steps, in reverse order of discovery, then define the steps of the corrective transaction. In such an embodiment, both the failed transaction and the corrective transaction might be business transactions consisting of ordered activities or transactions, each thus being a portion of a business process, possibly involving human interaction to accomplish business activities.
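The back-chaining embodiment can be sketched in a few lines. This Python fragment is purely illustrative; the state names and the `inverse_steps` table (standing in for the pool of pre-defined steps in the process schema) are hypothetical.

```python
def back_chain(goal, start, inverse_steps, max_depth=10):
    """Back-chain from a goal state meeting the acceptable conditions
    toward the consistency point `start`. `inverse_steps` maps a state
    to [(step_name, predecessor_state), ...]; the steps discovered,
    emitted in reverse order of discovery, form the corrective
    transaction."""
    def search(state, depth):
        if state == start:
            return []
        if depth == 0:
            return None
        for name, pred in inverse_steps.get(state, []):
            tail = search(pred, depth - 1)
            if tail is not None:
                return tail + [name]  # forward order: start -> state
        return None
    return search(goal, max_depth)

# The corrective transaction discovered backward from the goal:
inverse_steps = {"acceptable": [("ship", "packed")],
                 "packed": [("pack", "consistency_point")]}
print(back_chain("acceptable", "consistency_point", inverse_steps))
# ['pack', 'ship']
```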

In another embodiment, the selection of acceptable conditions, acceptable state, and sequence of steps that constitute the corrective transaction may be optimized using one or more of a variety of optimization techniques (these will be well-known to those familiar with the art) to meet given optimization goals. For example, the optimization goals might include minimum resource usage, shortest execution time, least human interaction required, and the like. Similarly, the members of the set of acceptable conditions may be prioritized or ordered based on some arbitrary optimization criteria, and subsequently selected as needed through automated or manual means.
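One simple realization of such prioritization is a weighted score over the named goals. The following Python fragment is an illustrative sketch only; the candidate metrics and the weight values are hypothetical.

```python
def select_acceptable(candidates, weights):
    """Rank candidate corrective transactions by a weighted score over
    the optimization goals (resource usage, execution time, human
    interaction); lower scores are preferred."""
    def score(c):
        return sum(weights[m] * c[m] for m in weights)
    return sorted(candidates, key=score)

candidates = [
    {"resources": 5, "time": 2, "human_steps": 0},
    {"resources": 1, "time": 8, "human_steps": 1},
]
# Weighting human interaction heavily favors the fully automated candidate.
weights = {"resources": 1.0, "time": 1.0, "human_steps": 10.0}
print(select_acceptable(candidates, weights)[0])
# {'resources': 5, 'time': 2, 'human_steps': 0}
```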

It is well within the means of the average professional skilled in the relevant arts to extend the concept of a corrective transaction to more complex scenarios involving multiple transactions for which some group behavior is desired. A common example occurs in practice in the context of process management and workflow. By a process we mean a collection of interdependent transactions (including possibly business transactions, logical transactions, and pseudo-transactions) that transform the state of a set of resources in a well-defined though not necessarily strictly deterministic manner, that manner being identified by a collection of transition rules (integrity constraints) which specify the permissible (partial) orderings of those transactions in time. Certain connected subsets of these transactions may themselves have atomic properties though not all of the ACID transaction properties, and so are considered pseudo-transactions. In some embodiments of a process, some or all of the transactions constituting the process may not be true transactions in the strict sense of the word and may be referred to as tasks, activities, business functions, and the like. (Indeed, the individual operations of any type of transaction can be considered to be a process.)

For example, it may be difficult in practice to enforce the isolation property across these transactions: thus, the result of some transaction deep in the dependent chain (or tree or net) may influence the outcome of some transaction that is not one of those in the atomic group. For practical reasons (performance, lack of control, etc.), we may not be able to use distributed transactions or compensation. Both distributed transactions and compensation may furthermore be undesirable simply because they return the process to an initial state for the atomic group of transactions rather than moving it forward to an acceptable state and meeting acceptable conditions.

The method of corrective transactions permits analysis of a process schema of which a failed transaction is a part, the supplementing of the process as necessary with interactive input, and determination of a partially ordered set of transactions or actions (this set constituting the corrective transaction) that will transition from the current state to a state that is approximately—in terms of consistency goals—the same as would have been achieved had all gone well. How closely the corrected state approximates the one that would have resulted is entirely under the control of the system designer, constrained only by limitations imposed by the intended application or the real world.

A process often contains multiple alternate paths specifying the work to be done and leading to various states or conditions satisfying various consistency conditions, the alternate paths being selected either singularly or severally at a branch point in the process. Thus, from a branch point it may be possible to achieve a certain amount of work and an associated acceptable state in multiple ways, some more “consistent” or more ideal than others. It may even be possible to achieve exactly the ideal acceptable state by an alternate path. Such an alternative path constitutes the corrective transaction. It may involve using different resources, require doing some work that would not otherwise have been done, require leaving some otherwise desirable work undone, require supplementing the process with interactive input, and so on.

In a further extension to the preferred embodiment of this submethod of the present invention, a cost-benefit approach (similar to that sometimes applied to compensating transactions) is used. Traditional compensating transactions are used when the combined cost of undo followed by redo is relatively small and has minimal impact on the rest of the system, when there are no context-dependent side-effects involved, when there are commutable transactions at every stage, or when an undo followed by redo is unlikely to cause errors in some other portion of the system (given the resource cost and especially in terms of time delays). Otherwise, a corrective transaction is used to transition directly to an acceptable state, which need not be the original target state.

In a further extension of the preferred embodiment of this submethod of the present invention, this method permits manual input to define and apply the corrective transaction to the current state to reach the desired acceptable state.

In a further extension of the preferred embodiment of this submethod of the present invention, this method uses previously-determined, policy-driven programming implementing pre-set rules of the business to derive, from the difference between the desired acceptable state and the current but incorrect state, the nature of the corrective transaction, and then automatically applies the corrective transaction to the current state to reach the desired acceptable state.

In a further extension of the preferred embodiment of this submethod of the present invention, this method uses methods such as goal-oriented programming or genetic algorithms to derive, from the difference between the desired acceptable state and the current but incorrect state, the nature of the corrective transaction, and then automatically applies the corrective transaction to the current state to reach the desired acceptable state.

In one alternative extension of the above further extension to the preferred embodiment of this submethod of the present invention, this method uses backward-propagating logic (‘back propagation’) to derive, from the difference between the desired acceptable state and the current but incorrect state, the nature of the corrective transaction, and then automatically applies the corrective transaction to the current state to reach the desired acceptable state.

In an alternative extension of the last-named extension of the present invention, the method uses matrix, linear, or other algebraic algorithms to calculate the least-cost, highest-benefit corrective transaction from the current state to the desired acceptable state, and then automatically applies the corrective transaction to the current state to reach the desired acceptable state.

In another alternative extension of the present invention, the method uses single-element redefinition algorithms to calculate the least-cost, highest-benefit corrective transaction from the current state to the desired acceptable state, and then automatically applies the corrective transaction to the current state to reach the desired acceptable state.

In another alternative extension of the present invention, the method uses any of the above-named techniques to calculate the corrective transaction to be applied to the current state, but only attempts to satisfy the minimally-acceptable set of conditions when attempting to derive the corrective transaction.

In another alternative extension of the present invention, the method uses any of the above-named techniques to calculate which corrective transaction will reach the closest possible alternative end state to the minimally acceptable consistent state, applies the corrective transaction, and then reports the remaining difference for manual implementation of the final step to reach said minimally acceptable consistent state.

By extension, the method of consistency points can be applied to pseudo-transactions, physical transactions, logical transactions, and business transactions.

4 Lookahead-Based Resource Management

Existing resource management methods do not take into account available information about either the operations and resources involved in a transaction, or the transactions (and therefore the resources) involved in a business process. Thus, for example, if a first step involves a request to read a data resource and a subsequent step involves a request to modify that same data resource, the probability of that data resource being found in cache is not influenced by any determination that the subsequent step will or will not require that data resource. Some DBMS products attempt to keep all data resources, once accessed, in cache (or some other high speed storage). Various algorithms may be used for determining when cache, or some portion thereof, can be overwritten (for example, a least recently used algorithm and its many variants). Other DBMS products may influence the probability that certain data resources will be kept in cache for a longer time based on statistical patterns of access. For example, certain types of requests involve sequential reading of large amounts of data resources and it makes sense to “pre-fetch” the next group of data in the expectation that the sequential reading will continue. As another example, certain types of cursor activity in a relational DBMS strongly suggest that the data resource initially read will be subsequently updated, as with SQL requests of the form OPEN CURSOR . . . FOR UPDATE . . . None of these methods has the advantages of pre-determining the need for resources.

Lookahead-based resource management is a submethod of the present invention that enables optimized automation and execution of a transaction or group of transactions, particularly feasible and appropriate for complex transactions as defined above. This is accomplished by making some or all resources (such as data or other resources) that will subsequently be used in processing a transaction or group of transactions explicitly known to the software responsible for processing said transaction or group of transactions in advance of the need to execute said transaction or group of transactions. The optimized management of those resources needed to process the transaction or group of transactions, and possibly other resources, is enabled by means to inform the software responsible for processing and/or optimization (the ‘Transaction Process’) of said resources either by directive or by inference in association with the definition of the request for said processing. This is done by making the definition of one or more steps in a transaction (or group of transactions) known by one of several means to the Transaction Process in advance of the request to process said step or steps. From such an advance definition, the Transaction Process can infer the resources necessary to perform said step or steps. Alternatively, and as a means of further efficiency, the originator of the request definition (whether a human, program, or machine) can incorporate the identification of the resources directly in the definition. As a means of yet further efficiency, the originator can include within the request definition directives that instruct the Transaction Process as to how to optimally manage resources in anticipation of steps of a transaction or group of transactions.

In the preferred embodiment of this submethod, the entire transaction definition is made known to the Transaction Process in advance of the initial request to begin processing that transaction (possibly by name or some other transaction identifier). The definer of the transaction identifies at transaction definition-time the data resources that should be highly favored for cache retention, at what step to begin such favoring, and at what step to remove or reduce that favoring. As a further efficiency, these identifications may be aided through automated techniques such as monitoring the use of resources while the transaction is being run, thereby identifying those resources and determining at which points particular resources are no longer required. In this embodiment, these resources are accessed once and then maintained in cache until the last step that needs said resources. In the event that there is insufficient cache, other secondary methods of cache management may then be used. As a further efficiency, resources are acquired and released at consistency points, thereby reducing the likelihood that an error or rollback condition will force resources to be released. Thus, as a specific example, a transaction containing a step to read some data followed, perhaps with intervening steps, by a step to modify that same data might be predefined as a stored procedure (for example) and invoked by name. Following the transaction definition, the Transaction Process marks the data read (as a consequence of the first step) to be highly favored for retention in cache until the second step completes. The cache management algorithms used by the Transaction Process (well known to those familiar with the art) are augmented to give cache preference to data so marked in an obvious manner. 
In another embodiment, the Transaction Process identifies the resources needed by each step of the transaction automatically, and further identifies which resources will be needed multiple times, and at what point those resources may be released.
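The cache-favoring augmentation described above can be illustrated with a small sketch. This Python fragment is not part of the claimed method; it grafts a "favored" set onto an ordinary least-recently-used cache, the class and method names being hypothetical.

```python
from collections import OrderedDict

class LookaheadCache:
    """LRU cache augmented with lookahead favoring: a resource marked as
    needed by a later transaction step is exempt from eviction until the
    last dependent step releases it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, in LRU order
        self.favored = set()

    def favor(self, key):    # a later step is known to need this resource
        self.favored.add(key)

    def release(self, key):  # the last dependent step has completed
        self.favored.discard(key)

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            # evict the least recently used *unfavored* entry
            victim = next((k for k in self.entries if k not in self.favored),
                          None)
            if victim is None:
                break  # everything is favored; fall back to secondary policy
            del self.entries[victim]

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)
            return self.entries[key]
        return None
```

A usage example: with capacity 2, favoring "a" before inserting "a", "b", and "c" causes "b", not "a", to be evicted, even though "a" is older.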

In another embodiment, the Transaction Process further optimizes processing by pre-allocating cache, storage space, locks, or other resources based on advance knowledge of one or more of the steps in the transaction. In another embodiment, the Transaction Process may alter the order of execution of the steps in such a manner that the intended meaning of the transaction is not altered, but resource management and possibly performance are optimized, for example, by pre-reading all data in such a manner as to reduce disk I/O, to improve concurrency, or to improve parallel processing. Numerous other similar optimizations that become possible when one or more of the steps of a transaction are known in advance of the need to process those steps will be readily apparent to one familiar with the art.

In another embodiment, the definitions of a group of transactions necessary to process a particular application are stored in a repository. When a request is made to run the application, the Transaction Process looks up the definition of the transactions pertaining to said application, including all the steps in each transaction. The Transaction Process then determines the resources necessary to perform each step, determining at which step said resources must be first acquired, at which step they will last be used, and at which step they can be first released. (In an alternative embodiment, the repository also contains identification of all resources necessary to perform those steps, said resources having been previously identified either by software or human means. In yet another alternative embodiment, the repository also contains the relative time of said first acquisition, final use, and first possible release of each required resource.) The Transaction Process then applies any of numerous optimization methods well-known or accessible to one familiar with the art to optimize management of resources in its environment including, for example, data caching, lock management, concurrency, parallelism, and the like.

5 Dependency-Based Concurrency Optimization

The method of dependency-based concurrency optimization enables a scheduling facility to restructure the steps or operations in a collection of one or more transactions so as to optimize concurrency and efficiency. By restructuring we mean changing either the order or the context of execution of transactions, steps, or groups of steps so as to be different from that order or context in which those transactions, steps, or groups of steps were submitted. The purpose of this method of “static scheduling” is to determine which transactions can absolutely be run together without interference, not which ones cannot. If there is doubt, traditional dynamic scheduling can be used. Dependency-based concurrency optimization is an improvement upon traditional transaction classes and traditional conflict graph analysis in that it provides a new means to determine dependencies and to respond to them using transaction restructuring. By augmenting the definition of a transaction (or group of transactions) with the dependencies among steps or groups of steps of said transaction and its consistency points, whether by human or computer means, the identification of which steps must be performed in which order can be determined using means well-known to those familiar with the art, including manual means. This information enables a computer system capable of parallel or concurrent processing to perform those steps or groups of steps which satisfy certain criteria in parallel or in an order different from the order in which they are submitted, possibly at the discretion of an optimizer component. In particular, steps or groups of steps which can optionally be performed in parallel or in a different order are those that (1) have no mutual dependencies and (2) are not dependent on any other steps that have not yet been performed. Said dependency information and said consistency point information may be supplied by any of a number of means.
For example, each dependence between every pair of steps might be supplied as a simple instruction “(1,2), (1,3)”, meaning that step 2 depends on step 1 and step 3 depends on step 1. Alternatively, the entire set of partially ordered dependencies might be supplied as a single data structure consisting of, for example, a linked list of trees with each tree specifying dependencies (a ‘dependency tree’), the linked list simply being one possible means of collecting the dependency trees. Similarly, steps can be grouped together such that they have no dependencies with any steps not in the group and such that, if they begin execution on resources that are in a consistent state, then those resources are left in a consistent state when that group of steps complete, such a grouping being known as a ‘consistent group’. A consistent group bounded by durable consistency points satisfies the formal definition of a transaction, albeit an implicit transaction. For example, if steps 1, 2, and 3 form such a group of steps bounded by consistency points and with the dependencies in the previous example, both dependency and consistency point information might be supplied via the instruction “<(1,2), (1,3)>”. Numerous other means for supplying such information will be apparent to one familiar with the art, some means being optimally non-redundant, some optimal for human specification, some optimal for space, some optimal for processing time, and some optimal for yet other purposes. In said augmentation, each dependency can be specified in such a manner as to uniquely identify both transactions and the steps of those transactions. For example, each transaction might be given a unique transaction identifier and each step an identifier unique within that transaction. Then, a dependency specification such as “(A.1, A.2), (A.1, B.3)” can be given at any time after the referenced transaction steps are specified.
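The pairwise notation above lends itself directly to a scheduling sketch. The following Python fragment is illustrative only, not part of the claimed method: it groups steps into "waves" such that every step in a wave depends only on steps in earlier waves, so each wave may execute in parallel.

```python
def parallel_waves(steps, deps):
    """Group steps into waves of mutually independent steps.
    `deps` is a set of (before, after) pairs, mirroring the
    "(1,2), (1,3)" notation: step `after` depends on step `before`."""
    remaining, done, waves = set(steps), set(), []
    while remaining:
        # a step is ready once all of its predecessors have been performed
        wave = {s for s in remaining
                if all(b in done for (b, a) in deps if a == s)}
        if not wave:
            raise ValueError("cyclic dependencies")
        waves.append(sorted(wave))
        done |= wave
        remaining -= wave
    return waves

# "(1,2), (1,3)": steps 2 and 3 each depend on step 1,
# so step 1 runs first and steps 2 and 3 may run in parallel.
print(parallel_waves([1, 2, 3], {(1, 2), (1, 3)}))
# [[1], [2, 3]]
```

A production scheduler would consume the dependency-tree or consistent-group form instead of raw pairs, but the readiness test is the same.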

In the event that a transaction definition is not augmented with dependencies among the steps and among a group of transactions, dependencies can be determined automatically or semi-automatically via, for example, the following methods:

if two transaction steps or groups of steps do not touch the same resources, they are independent (although they may be transitively dependent);

if two transaction steps or groups of steps have the same ultimate result irrespective of the order of application (that is, if they are commutative), they are independent;

if two transaction steps or groups of steps have no applicable consistency conditions in common, they are independent; or,

if two transaction steps or groups of steps cannot both violate at least one consistency condition, thereby producing the same error, they are independent.
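Two of these tests can be sketched concretely. In the following illustrative Python fragment (not part of the claimed method), each step is a hypothetical dict carrying the resource set it touches and the consistency conditions applicable to it; a real system would derive these sets from the transaction definitions. Either sufficient test alone establishes independence.

```python
def independent(a, b):
    """Apply the first and third tests above: steps are independent if
    their resource sets are disjoint, or if they have no applicable
    consistency conditions in common."""
    no_shared_resources = not (a["resources"] & b["resources"])
    no_shared_conditions = not (a["conditions"] & b["conditions"])
    return no_shared_resources or no_shared_conditions

read_x  = {"resources": {"x"}, "conditions": {"x >= 0"}}
write_y = {"resources": {"y"}, "conditions": {"y >= 0"}}
write_x = {"resources": {"x"}, "conditions": {"x >= 0"}}
print(independent(read_x, write_y))  # True: disjoint resources
print(independent(read_x, write_x))  # False: same resource and condition
```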

Transaction steps or consistent groups which execute within a single application instance and that are independent may be restructured. Consistent groups or transactions that are independent (that is, all of the steps in one consistent group or transaction are independent of all the steps in the other consistent group or transaction, respectively) may be restructured even if they run in separate application instances and with transaction isolation guaranteed by the system. As is well known, this fact enables the execution of such transactions without the overhead of locking (used to enforce isolation in pessimistic concurrency control) or the overhead of conflict detection mechanisms (used in optimistic concurrency control), thereby further optimizing the performance of transaction processing, so long as only mutually independent transactions are executed concurrently.

By extension, transactions that do not meet the mutual independence criteria may be simultaneously scheduled using some other method (the ‘local method’) to maintain concurrency and isolation (such as two-phase locking) provided that every collection of mutually independent transactions (or consistent groups) is isolated from each other and from all other transactions. Insofar as the ‘local method’ is concerned, each collection of mutually independent transactions (or consistent groups) is made to appear as a single transaction. For example, if two-phase locking is the local method, locks are maintained for each collection of mutually independent transactions (or consistent groups) as if they were a single transaction, and transactions (or consistent groups) within the collection read through all locks held by the collection but respect locks held by transactions outside the collection.

The method of dependency-based concurrency optimization may be extended with the concept of “conflict classes.” Transactions are divided into classes, and possibly belong to multiple classes. Each pair of classes is specified as being either dependent (potentially in conflict) or independent (impossible to ever be in conflict). If a transaction is not yet classified, it is evaluated to determine with which classes it potentially conflicts and with which it is independent. To belong to a class, the transaction must be potentially in conflict with every transaction in the said class. If the transaction matches the dependency and independency properties of the said class with respect to all other classes, it belongs to the said class; otherwise, it belongs to a different class. If no existing class meets these criteria, the transaction belongs to a new class. Transaction definitions are uniquely identified, and are recorded as belonging to a particular class based on that transaction identifier. Transactions are invoked by transaction identifier. Whenever a transaction request is received with such an identifier (or some means which permits association with such an identifier), the scheduler determines the classes to which the transaction belongs and from this information obtains the list of classes with which it is potentially in conflict (the dependent classes). It then checks to see if any running transaction belongs to one of the dependent classes. If such a transaction is running, the desired transaction is either deferred until that transaction completes, or another method of guaranteeing transaction isolation is used. If no such transaction is running, the desired transaction is executed.
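The class-based admission check can be sketched as follows. This Python fragment is illustrative only; the class names, the `conflicts` table, and the deferral-by-refusal policy are hypothetical simplifications of the scheduler behavior described above.

```python
class ConflictClassScheduler:
    """Admit a transaction only if no running transaction belongs to a
    class that the new transaction's classes may conflict with.
    `conflicts` maps each class to the set of classes it is dependent on
    (potentially in conflict with)."""
    def __init__(self, conflicts):
        self.conflicts = conflicts
        self.running = {}  # txn_id -> set of classes

    def try_start(self, txn_id, classes):
        dependent = set()
        for c in classes:
            dependent |= self.conflicts.get(c, set())
        for other_classes in self.running.values():
            if other_classes & dependent:
                return False  # defer: a potentially conflicting txn is running
        self.running[txn_id] = set(classes)
        return True

    def complete(self, txn_id):
        self.running.pop(txn_id, None)

# Class "A" conflicts with itself; class "B" conflicts with nothing.
s = ConflictClassScheduler({"A": {"A"}, "B": set()})
print(s.try_start(1, {"A"}))  # True
print(s.try_start(2, {"A"}))  # False: deferred behind transaction 1
print(s.try_start(3, {"B"}))  # True: independent class
```

A refused transaction would be queued and retried on completion of the conflicting transaction, or run under another isolation mechanism, as the text notes.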

Refinements of the technique are possible. In one embodiment, for example, the classifying of transactions is done at the transaction step level and it is then possible to schedule concurrent transaction steps from multiple transactions (as will be apparent to anyone familiar with the art). In this embodiment, each subsequent step of a transaction must be shown to be independent of all preceding and current steps of running transactions before it is permitted to run. In another embodiment, concurrent transactions proceed step by step until a possible conflict based on classes is detected, at which point one transaction is either deferred until the other transaction completes or else is rolled back to a consistency point (possibly the beginning of the transaction) and either resubmitted or a corrective transaction is submitted.

6 Combined Implementation (the Preferred Embodiment)

The preferred embodiment of the present invention is implemented in software (the ‘Adaptive Transaction Manager’) on a distributed network of computers with a distributed database management system implementing a business process involving multiple business entities. The business process consists of a large number of transactions, tasks, activities, and other units of work, many of them complex and some of them of an ad-hoc nature such that the entirety of their constituent steps or operations are not knowable in advance. The Adaptive Transaction Manager automatically identifies dependencies, consistency points, consistent groups, and redundant consistent groups. If a deadlock or other failure is encountered, the Adaptive Transaction Manager automatically recovers by rollback to a consistency point that eliminates the source of the error and then attempts to redo the work (it aborts only after retrying a pre-determined number of times or after a pre-determined amount of work). Redundant consistent groups are eliminated using transaction relaying, since two concurrent transactions having the same consistent group may share the work done by that group. A combination of transaction relaying, restructuring, and corrective transactions is used to eliminate most distributed transactions. When an error occurs, the error is classified according to whether it represents a transient, semantic, or hardware failure. If it is transient or hardware, transaction rollback to the most recent consistency point is invoked and the intervening work is resubmitted. This sequence is repeated for up to a fixed number of times and possibly with an intervening time delay (both determined by the type of error) until the transaction either succeeds or the number is surpassed. If the number of repetitions is surpassed, transaction rollback to an earlier consistency point is invoked and the work resubmitted. This process continues until the system recovers. 
If the error is semantic, the Adaptive Transaction Manager determines which prior consistency point will provide a starting point of an alternate path within the business process that best leads to an acceptable state, preferably with the least effort and best chance of successful completion. It then invokes one or more corrective transactions that together are functionally and semantically equivalent to that alternate path. The Adaptive Transaction Manager optimizes for efficiency through the use of lookahead resource management and dependency-based concurrency optimization, restructuring transactions and consistent groups where possible to minimize overhead (for example, due to locking).
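For semantic errors, the selection of the alternate path can be sketched as a scoring problem over candidate (consistency point, path) pairs: prefer the best chance of successful completion, breaking ties by least effort, as the paragraph above describes. The candidate structure, field names, and scores below are illustrative assumptions, not part of the specification.

```python
def choose_alternate_path(candidates):
    """Pick the candidate with the best chance of success, breaking
    ties by least effort (higher success_prob wins; lower effort wins)."""
    return max(candidates, key=lambda c: (c["success_prob"], -c["effort"]))

# Hypothetical candidates: each pairs a prior consistency point with an
# alternate path through the business process.
candidates = [
    {"cp": "cp1", "path": "reroute-order",     "success_prob": 0.9, "effort": 5},
    {"cp": "cp2", "path": "refund-and-rebook", "success_prob": 0.9, "effort": 2},
    {"cp": "cp3", "path": "manual-review",     "success_prob": 0.6, "effort": 1},
]
best = choose_alternate_path(candidates)
```

Here `cp2` is chosen: it ties for the best success probability but requires the least effort, after which the manager would invoke corrective transactions equivalent to that path.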

The Adaptive Transaction Manager can roll back the system to a consistency point if there is an error that cannot be compensated for, if the cost of the compensation exceeds the value gained by the correction, or for other similar reasons. In a further enhancement of the preferred embodiment, the system record of the data and resources used in each transaction is used to hand off responsibility and control over the data and resources from one transaction to the next as each completes, that is, as each reaches a consistency point. Only those data and resources which are fully and correctly ‘transitioned’ are handed off, allowing auditable and non-interfering distribution or partial branching to occur without the hazard of contaminating data or processes, and without incurring the overhead of both maintaining multiple copies of data and tracking the current ‘correct’ subset. In this sense, a transaction that has reached a partial state which is correct, for all other transactions, with respect to a subset of the data and resources it uses can commit and release those data and resources rather than continue to tie them up needlessly.
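The hand-off rule above can be sketched with sets: at a consistency point, only resources whose state is fully ‘transitioned’ move to the next transaction, while the rest remain held by the original owner. The function name and resource identifiers are illustrative assumptions.

```python
def hand_off(held, transitioned, next_txn_held):
    """At a consistency point, release only the fully transitioned
    resources to the next transaction; keep the rest locked."""
    released = held & transitioned
    return held - released, next_txn_held | released

# Transaction A holds r1..r3 but has only fully transitioned r1 and r3.
held_a, held_b = hand_off({"r1", "r2", "r3"}, {"r1", "r3"}, set())
```

After the hand-off, transaction A still owns `r2` while `r1` and `r3` are available to transaction B, avoiding both duplicate copies and needless locking.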

Under the preferred embodiment of the present invention, the Adaptive Transaction Manager actively uses dependencies to detect which transaction needs to own what data and what resources at each particular step along a complex transaction, and minimizes duplication and locking of the same. Moreover, flexible exploration of alternatives becomes feasible by implementing, in a further extension of the preferred embodiment, alternative methodologies for controlling such data and resources. For example, voting rules (three processors to two), hierarchical rules (home office database overrules local branch), or heuristically derived rules peculiar to a particular business or operation may be used.
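The two named control policies can be sketched briefly: a majority vote among replica values (the "three processors to two" case) and a hierarchical override in which the highest-precedence site wins. The function names and data shapes are assumptions for illustration only.

```python
from collections import Counter

def vote(values):
    """Majority vote among replica values, e.g. three processors to two."""
    value, _count = Counter(values).most_common(1)[0]
    return value

def hierarchical(values_by_site, precedence):
    """The highest-precedence site's value wins (home office over branch)."""
    for site in precedence:
        if site in values_by_site:
            return values_by_site[site]
    raise KeyError("no site supplied a value")

winner = vote([10, 10, 10, 7, 7])  # three replicas report 10, two report 7
authoritative = hierarchical({"branch": 7, "home_office": 10},
                             ["home_office", "branch"])
```

Heuristically derived rules peculiar to a business would slot in as further functions with the same interface.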

In the preferred embodiment of the invention, whenever a compensating or corrective transaction is needed, a full audit trail of the original acceptable state, mistaken state, compensating or corrective transaction, and final acceptable state is maintained. In a further extension of the preferred embodiment, the log of individual error audits is analyzed to identify recurring problems and suggest where additional preventative efforts should be taken, including additional corrective transactions.
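A minimal sketch of the audit record described above might capture the four states per correction; the record type, field names, and log structure are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ErrorAudit:
    """One entry per compensating or corrective transaction."""
    original_state: dict   # the original acceptable state
    mistaken_state: dict   # the erroneous state being corrected
    corrective_txn: str    # identifier of the corrective transaction
    final_state: dict      # the final acceptable state

AUDIT_LOG = []

def record_correction(original, mistaken, txn, final):
    AUDIT_LOG.append(ErrorAudit(original, mistaken, txn, final))

# Hypothetical correction: a mistaken fee is reversed.
record_correction({"bal": 100}, {"bal": 90}, "reverse-fee", {"bal": 100})
```

Analyzing `AUDIT_LOG` offline (e.g. grouping by `corrective_txn`) is where the recurring-problem detection mentioned above would attach.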

In the preferred embodiment, for each predetermined transaction the anticipated consistency category of the final state is registered with the Adaptive Transaction Manager, classes of errors are associated with corresponding classes of recovery methods (including compensating or corrective transactions), and the Adaptive Transaction Manager determines which compensating or corrective transactions to execute so that recovery to an acceptable state can take place automatically and consistently. Additionally, the Adaptive Transaction Manager maintains a log of ‘acceptable’ states as transactions are processed without uncompensated errors. The extent to which the Adaptive Transaction Manager allows transitions to become permanent depends now more on the level of accuracy which the business feels comfortable with than upon the static limitations of record-keeping.
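The association of error classes with recovery methods can be sketched as a registry keyed by exception class, with lookup walking the error's class hierarchy so that a recovery registered for a broad class also covers its subclasses. The registry shape and class names below are illustrative assumptions, not the patent's disclosed mechanism.

```python
RECOVERY_REGISTRY = {}

def register_recovery(error_class, recovery):
    """Associate a class of errors with a class of recovery methods."""
    RECOVERY_REGISTRY[error_class] = recovery

def recover(error):
    """Walk the error's class hierarchy to find the matching recovery,
    so a subclass inherits its parent class's registered recovery."""
    for cls in type(error).__mro__:
        if cls in RECOVERY_REGISTRY:
            return RECOVERY_REGISTRY[cls](error)
    raise error  # no registered recovery: re-raise

# Hypothetical error taxonomy and recovery.
class SemanticError(Exception): pass
class PricingError(SemanticError): pass

register_recovery(SemanticError, lambda e: "corrective-transaction")

outcome = recover(PricingError("bad price"))
```

Here `PricingError` has no recovery of its own, so the lookup falls through to the one registered for `SemanticError`, which is the automatic, consistent dispatch the paragraph describes.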

Extensions to the preferred embodiment would make the system more applicable for particular business purposes including telecommunications rerouting; inventory management for retail or distributional operations that encounter spillage, wastage, or theft; electronic funds transfer message repair; financial transactions affected by governmental fiats; and billing systems reflecting or affected by collection processes, debtor failures, and bankruptcies.

In a further extension of the present invention this method is applied to a model for negotiations allowing exploration of hypothetical or proposed solutions, and their consequences and costs, to be evaluated.

In a further extension of the present invention this method is applied to asset exchanges where the parties do not have a prior agreement as to the value of the particular elements, or even as to which particular elements are the subject of the proposed exchange, allowing intermediate positions to be evaluated and the costs and benefits of concessions and tradeoffs to be explicitly assessed.

However, the scope of this invention includes any combination of the elements from the different embodiments disclosed in this specification, and is not limited to the specifics of the preferred embodiment or any of the alternative embodiments mentioned above. Individual user configurations and embodiments of this invention may contain all, or less than all, of the elements disclosed in the specification according to the needs and desires of that user. The claims stated herein should be read as including those elements which are not necessary to the invention yet are in the prior art and may be necessary to the overall function of that particular claim, and should be read as including, to the maximum extent permissible by law, known functional equivalents to the elements disclosed in the specification, even though those functional equivalents are not exhaustively detailed herein.
