US 20040044662 A1 Abstract Methods of optimizing access to a relation queried through a number of predicates. The methods identify one or more candidate predicates of the selection condition that can be used to factorize the selection condition. A gain from using one or more of the candidate predicates to factorize the selection condition is computed. One or more of the candidate predicates that result in a positive gain are factored from the selection condition to produce a rewritten selection condition. The candidate predicates can be predicates that appear verbatim more than once in the selection condition and/or merged predicates formed from overlapping predicates in the selection condition.
Claims(75) 1. In a relational database having records stored on a medium, a method of optimizing processing of a query with a record selection condition that includes a number of predicates, comprising:
a) identifying one or more candidate predicates of the selection condition that can be used to factorize the selection condition; b) computing a gain from using a candidate predicate to factorize the selection condition; c) factorizing a candidate predicate that results in a positive gain from said selection condition to produce a rewritten selection condition; and d) processing said query with said rewritten selection condition. 2. The method of 3. The method of 4. The method of 5. The method of 6. The method of 7. The method of 8. The method of 9. The method of 10. The method of 11. The method of 12. The method of 13. The method of 14. The method of 15. The method of 16. The method of 17. The method of 18. The method of 19. The method of 20. The method of 21. The method of 22. The method of 23. The method of 24. The method of 25. The method of 26. The method of 27. In a database having records stored on a medium, a method of optimizing processing of a query with a record selection condition that includes a number of predicates, comprising:
a) identifying one or more merged predicates produced by the union of overlapping predicates of the selection condition that can be used to factorize the selection condition; b) computing a gain from using a candidate merged predicate to factorize the selection condition; c) substituting a merged predicate that results in a positive gain for overlapping predicates used to produce the merged predicate being substituted; d) factorizing the merged predicate substituted for overlapping predicates from said selection condition to produce a rewritten selection condition; and e) processing said query with said rewritten selection condition. 28. The method of 29. The method of 30. The method of 31. The method of 32. In a database having records stored on a medium, a method of optimizing processing of a query with a record selection condition that includes a number of predicates, comprising:
a) identifying a set of predicates on a given attribute; b) identifying overlapping predicates in the set; c) producing merged predicates by the union of overlapping predicates in the set; d) identifying one or more merged predicates that produce a greater gain than the overlapping predicates used to form the candidate merged predicates; e) adding the one or more candidate merged predicates to said set of predicates; f) selecting a predicate from said set that produces the greatest gain; g) factorizing the predicate selected from the set from said selection condition to produce a rewritten selection condition; and h) processing said query with said rewritten selection condition. 33. The method of 34. The method of 35. The method of 36. The method of 37. The method of 38. The method of 39. The method of 40. The method of 41. The method of 42. The method of 43. The method of 44. The method of 45. The method of 46. The method of 47. The method of 48. The method of 49. The method of 50. The method of 51. In a database having records stored on a medium, a method of optimizing processing of a query with a selection condition that includes a number of predicates, comprising:
a) identifying an exact factor predicate that is an exact factor of the selection condition and produces a gain when used to factorize the selection condition; b) identifying a merged predicate produced by the union of overlapping predicates of the selection condition that produces a gain when used to factorize the selection condition; c) determining which of the exact factor predicate and the merged predicate produces a greater gain when used to factorize the selection condition; d) factorizing the exact factor predicate from said selection condition to produce a rewritten selection condition when said exact factor predicate produces said greater gain; e) factorizing the merged predicate from said selection condition to produce a rewritten selection condition when said merged predicate produces said greater gain; and f) processing said query with said rewritten selection condition. 52. The method of 53. The method of 54. The method of 55. The method of 56. The method of 57. The method of 58. A computer readable medium having computer executable instructions stored thereon for performing a method of optimizing processing of a query with a selection condition that includes a number of predicates, the method comprising:
a) identifying one or more candidate predicates of the selection condition that can be used to factorize the selection condition; b) computing a gain from using a candidate predicate to factorize the selection condition; c) factorizing a candidate predicate that results in a positive gain from said selection condition to produce a rewritten selection condition; and d) processing said query with said rewritten selection condition. 59. The computer readable medium of 60. The computer readable medium of 61. The computer readable medium of 62. The computer readable medium of 63. The computer readable medium of 64. The computer readable medium of 65. The computer readable medium of 66. The computer readable medium of 67. The computer readable medium of 68. The computer readable medium of 69. The computer readable medium of 70. A computer readable medium having computer executable instructions stored thereon for performing a method of optimizing processing of a query with a selection condition that includes a number of predicates, the method comprising:
a) identifying one or more merged predicates produced by the union of overlapping predicates of the selection condition that can be used to factorize the selection condition; b) computing a gain from using a candidate merged predicate to factorize the selection condition; c) substituting a merged predicate that results in a positive gain for overlapping predicates used to produce the merged predicate being substituted; d) factorizing the merged predicate substituted for overlapping predicates from said selection condition to produce a rewritten selection condition; and e) processing said query with said rewritten selection condition. 71. The computer readable medium of 72. A computer readable medium having computer executable instructions stored thereon for performing a method of optimizing processing of a query with a selection condition that includes a number of predicates, the method comprising:
a) identifying a set of predicates on a given attribute; b) identifying overlapping predicates in the set; c) producing merged predicates by the union of overlapping predicates in the set; d) identifying one or more merged predicates that produce a greater gain than the overlapping predicates used to form the candidate merged predicates; e) adding the one or more candidate merged predicates to said set of predicates; f) selecting a predicate from said set that produces the greatest gain; g) factorizing the predicate selected from the set from said selection condition to produce a rewritten selection condition; and h) processing said query with said rewritten selection condition. 73. The computer readable medium of 74. A computer readable medium having computer executable instructions stored thereon for performing a method of optimizing processing of a query with a selection condition that includes a number of predicates, the method comprising:
a) identifying an exact factor predicate that is an exact factor of the selection condition and produces a gain when used to factorize the selection condition; b) identifying a merged predicate produced by the union of overlapping predicates of the selection condition that produces a gain when used to factorize the selection condition; c) determining which of the exact factor predicate and the merged predicate produces a greater gain when used to factorize the selection condition; d) factorizing the exact factor predicate from said selection condition to produce a rewritten selection condition when said exact factor predicate produces said greater gain; e) factorizing the merged predicate from said selection condition to produce a rewritten selection condition when said merged predicate produces said greater gain; and f) processing said query with said rewritten selection condition. 75. The computer readable medium of Description [0001] This disclosure generally concerns the field of databases. This disclosure relates more specifically to methods of optimizing multi-index access to a relation through a number of predicates. [0002] Modern relational database systems continue to process increasingly complex queries. Several reasons contribute to this trend. Queries submitted to relational database systems are increasingly being generated by applications. These queries tend to be more redundant and complex than human-typed queries. Additionally, a recent focus on extending relational database management systems (DBMS) with advanced functionality, like data warehousing and data mining, places a demand for fast execution of queries with complicated predicates. [0003] A straightforward way to evaluate a complex expression is to use a sequential scan on the table and evaluate the condition as a filter. 
When the selectivity of some of the predicates is small, a cheaper alternative is to use one or more indexes and combine their record identifiers (RIDs) using index-intersection and union (IIU) operators before fetching the data pages from disk. Thus, for example, a condition of the form X AND Y (written as XY) can be evaluated by generating record identifier lists for each of X and Y and then intersecting the two lists. Similarly, a condition X OR Y (written as X+Y) can be evaluated by using the union of the two record identifier lists for X and Y. In general, any multi-predicate selection condition can be evaluated by a series of index intersections and unions (IIU), which eventually leaves a list of record identifiers to be fetched from the data table. [0004] There are many possible intersection and union (IIU) plans for a given selection query. For example, a condition C [0005] The preceding paragraph is a straightforward example of how a query plan can be improved if it is slightly restructured. However, the general problem of identifying better plans for an arbitrarily complex boolean expression is not easy. The following example shows that disjunctive normal form (DNF) does not guarantee the best plan. Consider another condition C [0006] Existing query optimizers approach this problem in three ways. The first approach is to use a sequential scan followed by a filtered evaluation of the condition, particularly when the selection condition is very large and contains several disjuncts and conjuncts. The second approach is to generate an index intersection and union plan directly from the form in which the condition is represented in the query without searching the space of index intersection and union plans for the best plan. The third approach is to rewrite the query in conjunctive normal form or in disjunctive normal form, neither of which is optimal in all cases.
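The RID-list evaluation described above can be sketched with Python sets standing in for record identifier lists. This is an illustrative sketch only; a real DBMS would retrieve RID lists by scanning index pages, and the example RID values here are made up.

```python
# Index-intersection-and-union (IIU) evaluation sketch: RID lists are
# modeled as Python sets, so AND maps to set intersection and OR to union.

def evaluate_and(rids_x, rids_y):
    """X AND Y (written XY): intersect the two RID lists."""
    return rids_x & rids_y

def evaluate_or(rids_x, rids_y):
    """X OR Y (written X+Y): union the two RID lists."""
    return rids_x | rids_y

# Hypothetical RID lists produced by three index scans.
rids_x = {1, 2, 3, 4}
rids_y = {2, 3}
rids_z = {4, 9}

# Evaluate the condition X(Y + Z): one union followed by one intersection.
result = evaluate_and(rids_x, evaluate_or(rids_y, rids_z))
print(result)
```

The final set holds the record identifiers to be fetched from the data table.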
[0007] Minimizing boolean expressions through factorization is a problem in several areas of computer science. In compiler optimization it is useful for generating optimal programs for evaluating boolean expressions, in object-oriented databases for ad hoc evaluation of queries with boolean predicates, in relational database systems for factoring join relations on disjuncts and in VLSI circuit design for reducing floor area of circuits. Of these, the boolean minimization problem has been studied most extensively in the VLSI literature. [0008] Factoring boolean formulae to minimize the total number of literals is an important problem in VLSI design because the area taken up by a circuit for a boolean formula is roughly proportional to the number of literals in the formula. This problem is NP-hard and computing the optimal solution is computationally infeasible even for relatively small functions. However, the practical relevance of the logic minimization problem in VLSI has led to the design of several algorithms with various levels of complexity. These approaches can be grouped into three categories—algebraic, boolean and graph-theoretic. None of these approaches offer any guarantees about the quality of the factorization. Of the three classes of factoring methods, algebraic factoring is the most popular since it provides very good results while being extremely fast. Query optimization is different from VLSI logic minimization, because every literal in a boolean formula that represents a query is associated with a different fixed cost that depends on the literal's selectivity and index. Unlike the problem of VLSI logic minimization, the size of intermediate results is also important for query optimization since it affects index-intersection and index-union cost. [0009] Early relational database management systems rewrote query expressions as conjunctive normal form expressions and exploited only one index per expression.
Others rewrote the expressions in disjunctive normal form, evaluated each disjunct independently, and unioned the results. These simple approaches were augmented in later systems to exploit multiple indexes by evaluating them as arbitrary index intersection and union plans. These approaches attempted to choose the best subset of eligible indexes and sequence the record identifier merges for best performance. However, these approaches operate on the condition as directly expressed in the query and do not explore the space of alternative rewrites. [0010] Techniques for optimization of user-defined predicates with varying costs of evaluation and selectivity have been developed. These techniques attempt to reduce the number of invocations of the user-defined predicates while leveraging their selectivity. These techniques are for non-indexed access and CPU cost minimization. These techniques either concentrate on ANDs alone, are not applicable for indexed access or do not attempt to factorize common predicates across them. In these techniques, the focus is on reducing the number of invocations of the expensive functions. [0011] There is a need for a method that identifies an optimized index intersection and union-based plan for a query, by searching the space of several possible rewrites of the boolean expression. The expression is rewritten using exact and/or relaxed factors to improve the cost of indexed access to a relation. [0012] The present application concerns a method of optimizing processing of a query with a record selection condition that includes a number of predicates. In the method, one or more candidate predicates of the selection condition are identified that can be used to factorize the selection condition. A gain from using a candidate predicate to factorize the selection condition is computed.
The candidate predicate that results in a gain, preferably the greatest gain, is factorized from the selection condition to produce a rewritten selection condition. The same process may then be applied to the quotient and remainder of the factorization to further rewrite the selection condition. The query is processed with the rewritten selection condition. [0013] Two approaches that may be used to select predicates for factorization are a greedy approach and a dynamic programming approach. In the greedy approach, the candidate predicate that will result in the greatest gain is first factorized from the selection condition. In the dynamic programming approach, candidate predicates are chosen for multiple subsets of disjuncts of the selection condition. First, every pair of disjuncts is compared to identify common factors, if any. For every such pair of disjuncts, the gain from performing the factorization is computed. The process is then repeated for every possible set of three disjuncts, every possible set of four disjuncts, and so on, until the set of disjuncts is the entire selection condition. Computing the gain for such multiple sets of disjuncts can be done efficiently by using the computed gains of smaller subsets of these multiple sets. [0014] In various embodiments of the application, the candidate predicates are predicates that act on an indexed attribute. The gain computed is the cost savings obtained by reducing the number of index accesses for the candidate predicate, added to the cost savings from reducing the number of index intersections as a result of factorizing the candidate predicate, reduced by the cost increase from increased index-union costs that result from factorizing the candidate predicate. The gain is computed for all predicates that are literal divisors of the selection condition and the predicate that produces the largest gain is identified. The method is recursively applied to a rewritten selection condition.
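The greedy choice described above can be sketched as follows. Terms of a DNF condition are modeled as frozensets of literal names, and the gain of a repeated literal is approximated as the index retrievals saved by factoring it out; the per-literal costs and the simplified gain formula are assumptions for illustration, not the full cost model of the disclosure.

```python
# Greedy selection of a literal divisor from a DNF selection condition.
from collections import Counter

def best_literal_divisor(terms, retrieval_cost):
    """Return the repeated literal with the largest approximate gain,
    or None if no literal appears in two or more terms."""
    counts = Counter(lit for term in terms for lit in term)
    candidates = {lit: n for lit, n in counts.items() if n > 1}
    if not candidates:
        return None
    # Simplification: factoring a literal that appears n times saves
    # roughly (n - 1) retrievals of that literal's RID list.
    gain = {lit: (n - 1) * retrieval_cost[lit]
            for lit, n in candidates.items()}
    return max(gain, key=gain.get)

# f = xyz + wx + wyz; assumed per-literal retrieval costs.
terms = [frozenset("xyz"), frozenset("wx"), frozenset("wyz")]
costs = {"x": 10, "y": 5, "z": 5, "w": 8}
print(best_literal_divisor(terms, costs))
```

In the greedy approach this choice would be applied, and the process repeated on the resulting quotient and remainder.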
The candidate predicates are exact factors and/or merged predicates of the selection condition. When a merged predicate is used to factorize the selection condition, the results obtained from evaluating the rewritten selection condition that do not satisfy the original selection condition are filtered out. [0015] In one embodiment, a merging method is used to produce merged predicates that may be used to factorize the selection condition. In this embodiment, a set of predicates on a given attribute is identified. Predicates with an overlapping range on a given attribute are identified. Merged predicates are produced by the union of overlapping predicates in the set. The merged predicates that produce a greater gain than the overlapping predicates used to form the merged predicates are identified. The identified merged predicates are added to the set of predicates and the overlapping predicates used to form the identified merged predicates are deleted from the set. The predicate in the set that produces the greatest gain is selected. The selected predicate is used to factorize the selection condition. [0016] In one embodiment, exact factor predicates and merged factor predicates are evaluated to identify a factor that results in the greatest gain. In this embodiment, exact factor predicates that are an exact factor of the selection condition and produce a gain when used to factorize the selection condition are identified. Similarly, merged predicates produced by the union of overlapping predicates of the selection condition that produce a gain when used to factorize the selection condition are identified. The method then determines whether an exact factor predicate or a merged predicate produces a greater gain when used to factorize the selection condition. When an exact factor predicate produces the greatest gain, the exact factor is factorized from the selection condition to produce a rewritten selection condition.
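The merging step above unions overlapping ranges on a single attribute. A minimal sketch, assuming predicates are modeled as closed (low, high) intervals rather than the disclosure's actual predicate representation:

```python
# Union overlapping range predicates on one attribute into merged predicates.

def merge_overlapping(ranges):
    """Merge overlapping (low, high) intervals; returns the merged list."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1]:
            # Overlaps the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

# E.g. predicates 10<a<20 and 15<a<30 overlap and merge to 10<a<30.
print(merge_overlapping([(10, 20), (15, 30), (50, 60)]))
```

Because a merged predicate is a relaxation, rows retrieved via the merged predicate that fail the original condition must be filtered out afterwards, as the text notes.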
When a merged predicate produces the greatest gain, the merged predicate is factorized from the selection condition to produce a rewritten selection condition. The query is processed with the rewritten selection condition. [0017] In one embodiment, the disclosed methods are adapted to perform factorization efficiently even when the original expression is not in Disjunctive Normal Form (DNF). Since it might prove expensive to convert an expression to DNF before performing factorization, methods are disclosed that can perform factorization on arbitrarily nested AND-OR trees. The algorithms may be adapted to produce plans that satisfy constraints faced by the optimizer. The disclosed process identifies the best single-index plan for accessing data in a table, the best index-intersection-based plan, and the best plan that involves a series of index intersections followed by a series of index unions, interspersed with data lookups. In one embodiment, the process produces a plan based on arbitrary sequences of index intersections, index unions and data lookups. [0018] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which: [0019]FIG. 1 illustrates an exemplary operating environment for a system for evaluating database queries; [0020]FIG. 2 is a block diagram of a system for evaluating database queries according to an embodiment of the invention; [0021]FIG. 3 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0022]FIG. 4 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0023]FIG. 5 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0024]FIG. 6 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0025]FIG.
7 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0026]FIG. 8 is a flowchart representation of method steps used to practice an embodiment of the present invention; [0027]FIG. 9 is a flowchart representation of method steps used to practice an embodiment of the present invention; and [0028] FIGS. [0029] Exemplary Operating Environment [0030]FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the disclosed methods may be implemented. Although not required, the disclosed methods will be described in the general context of computer-executable instructions, such as program modules, being executed by a personal computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. [0031] With reference to FIG. 1, an exemplary system for implementing the disclosed methods includes a general purpose computing device in the form of a conventional personal computer [0032] A number of program modules may be stored on the hard disk, magnetic disk [0033] Personal computer [0034] When using a LAN networking environment, personal computer [0035] Database System [0036]FIG.
2 illustrates for one embodiment a computer database system [0037] Database [0038] Database server [0039] Database server [0040] Optimization Method [0041] The disclosed method optimizes multi-index access to a relation queried through a large number of predicates. Exploiting multiple indexes to access a table is a well-known optimization technique for queries with multiple predicates. Traditionally, the access plan generated depends on the exact form of the selection condition in the query. The disclosed methods rewrite selection conditions using common factors to yield better access plans for exploiting the various indexes on the relation. This application discloses an efficient factorization algorithm. In one embodiment, conditions in a conjunct of a complex condition are relaxed to enable better factoring. Algorithms disclosed in this application have been evaluated experimentally to identify the performance improvement for various query characteristics. [0042] This disclosure addresses the problem of optimizing single-relation queries with a complex selection condition involving multiple atomic predicates. While the optimization of queries where complexity is due to a large number of joins has received a lot of attention in the database literature, optimization of complex selection conditions involving multiple AND and OR operations has not been widely addressed. In the exemplary embodiment, this strategy can also be generalized to cases where not all predicates in the selection condition have a corresponding index. [0043] This disclosure discusses exact factoring, condition relaxation, and integrating factoring with index selection to identify an optimized boolean expression of the complex condition. [0044] Overview of Exact Factoring [0045] Complex conditions often contain atomic predicates repeated across many disjuncts. In this application, atomic predicates that are included in two or more disjuncts are referred to as exact factors.
It is more likely that atomic predicates will be repeated across disjuncts when predicates are blindly normalized to standard disjunctive normal form. Exact factoring exploits such repetition to reduce access cost. [0046] Overview of Condition Relaxation [0047] Often, predicates in a boolean expression might overlap even if they are not quite identical. This application discloses a novel condition-relaxation technique for merging overlapping predicates that are not amenable to normal syntactic factorization. For example, E [0048] Overview of Integrating Factoring with Index Selection [0049] In the exemplary embodiment, the exact and condition relaxation factoring algorithms assume that all atomic predicates with an eligible index will be evaluated via index scan. This may be suboptimal for predicates that are not very selective. Unfortunately, the decision to use an index for a predicate scan is both influenced by, and influences, the factoring decision. In one embodiment, the query optimizer [0050] The output of query optimization is an index intersection and union plan (IIU plan) that corresponds to a factored form of the original expression, generated by exact and approximate factoring. The IIU plan generates a record identifier list which will be used to fetch the data pages on which any remaining part of the expression will be evaluated. [0051] In some instances, it may be possible to evaluate a query without accessing the data pages at all. These are called index-only plans. In one embodiment, index-only plans are output if they turn out to be more efficient. [0052] In one embodiment, the disclosed methods are used to contribute to optimization of arbitrary select-project-join queries. Optimizing select-project-join queries involves reordering the joins, choosing a specific join strategy for each of the joins, and providing a plan for retrieving tuples satisfying a selection condition from the base tables.
The disclosed algorithms can be used in a module that is called by the query optimizer to determine how best to access a base table, given a complex selection condition. For some base relations of a join, such as the inner relation of a nested-loop-with-index join, optimization of a selection condition may not be necessary. Even so, the information produced would be useful in helping the query optimizer choose between different join strategies. [0053] Exact Factoring [0054] Queries may have a complex selection condition C in which multiple predicates are ANDed and ORed together. A table R may have indexes on a subset of the attributes that appear in selection condition C. The disclosed methods optimize the access path to the table R using one or more of these indexes. For this optimization, the part of the condition C left after setting to “true” the predicates on non-indexed attributes is evaluated. These non-indexed predicates are evaluated in a filter phase after the data pages are fetched. In one embodiment, all these remaining indexed predicates will be evaluated via index scans. [0055] In one embodiment, different index intersection and union plans are proposed and an optimized index intersection and union plan is determined for a condition C using exact factoring. The different intersection and union plans have a direct correspondence with the representation of the boolean formulas. Therefore, reference to “expressions” and their evaluation cost refers to the evaluation cost of the corresponding index intersection and union plan. [0056] An expression in the form A+BC is generally cheaper to evaluate than the expression (A+B)(A+C) using index intersection and union. Similarly, an expression represented as AB+AC often tends to be evaluated much less efficiently than an expression represented as A(B+C). In general, the “optimal” form of a condition is neither conjunctive normal form nor disjunctive normal form, but an intermediate factored form.
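The advantage of the factored form AB + AC versus A(B + C) can be made concrete with a back-of-the-envelope cost count. All RID-list lengths and intermediate-result sizes below are invented for illustration; the point is only that the unfactored plan retrieves the RID list for A twice and performs an extra intersection.

```python
# Rough plan-cost comparison under a merge cost linear in total input RIDs.

def merge_cost(*lengths):
    # Hash-merge cost assumed linear in the total RIDs processed.
    return sum(lengths)

# Assumed RID-list lengths (hypothetical).
len_a, len_b, len_c = 1000, 100, 100
len_ab = 50          # assumed size of A AND B
len_ac = 40          # assumed size of A AND C
len_b_or_c = 180     # assumed size of B OR C

# Plan 1: AB + AC -- retrieve A twice, two intersections, one union.
plan1 = (2 * len_a + len_b + len_c
         + merge_cost(len_a, len_b)
         + merge_cost(len_a, len_c)
         + merge_cost(len_ab, len_ac))

# Plan 2: A(B + C) -- retrieve A once, one union, one intersection.
plan2 = (len_a + len_b + len_c
         + merge_cost(len_b, len_c)
         + merge_cost(len_a, len_b_or_c))

print(plan1, plan2)  # the factored plan is cheaper under these numbers
```

With these numbers the factored plan avoids one retrieval of A's RID list and one full intersection, which dominates the added union cost.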
This problem is similar to the problem of VLSI logic minimization, where the objective is to find a formula for a given function that uses the fewest literals. The problem of optimizing query selection conditions is more complex, since there are different costs associated with each literal due to the literals' different selectivities and retrieval costs. In addition, there is an index-intersection/union cost associated with each AND and OR operator in the expression. The “best” expression may not necessarily contain the fewest literals. [0057] Overview of Generic Factoring of Disjunctive Normal Form Expressions [0058] A function f in disjunctive normal form may be factorized. For example, a function f=xyz+wx+wyz is in disjunctive normal form. In this disclosure, each disjunct (e.g. wx) in a disjunctive normal form formula is referred to as a term and each atomic condition like x or y is referred to as a literal. [0059] In one embodiment, the disclosed algorithm assumes that no term of f is directly contained in some other term, thus rendering it redundant. However, a term may be present even if the term happens to be covered by a combination of two or more other terms in this embodiment. [0060] In one disclosed embodiment, f is factorized in the following form:
[0061] f=D·Q+R, such that divisor D and quotient Q do not have any literals in common. This, in combination with other technical conditions, is referred to as algebraic division. The function f=xyz+wx+wyz could be factorized as f=(yz)(x+w)+wx. In the exemplary embodiment, the boolean expression is treated just like an algebraic expression and is factorized into a divisor D, quotient Q and remainder R. In the exemplary embodiment, divisor D, quotient Q and remainder R are recursively factorized. In one embodiment, a greedy algorithm is used with no guarantee of optimality. In the exemplary embodiment, a hill-climbing approach to selecting the best divisor D at each step ensures a practical solution to this otherwise NP-hard problem. In one embodiment, an existing efficient algorithm for computing Q and R, given f and D, is used. One such algorithm is disclosed in R. K. Brayton, Factoring Logic Functions, [0062] The divisor D can be chosen from: [0063] 1. Any repeated atomic literal, or [0064] 2. An expanded space also including expressions on the literals. (Described in Brayton, Factoring Logic Functions and R. K. Brayton et al., A Multiple-Level Logic Optimization System, [0065] In the exemplary embodiment, literal divisors are used both because of efficiency considerations, and because literal divisors provide acceptable results for selection query applications. When an expression includes a conjunction of a number of literals, the selectivity of the conjunction is likely to drop to a very low value by intersecting even three or four literals. Thus, after a few intersections, it may be cheaper to retrieve the actual data rather than to continue performing index intersections. Therefore, being able to effectively factorize to arbitrary depths is not particularly important in a query optimization application, in contrast to logic minimization. Literal-factoring provides speed and acceptable quality for query-optimization applications.
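For a single-literal divisor, the division f = D·Q + R discussed above is straightforward: the quotient collects the terms containing the literal (with the literal removed), and the remainder collects the rest. A minimal sketch, with DNF terms modeled as frozensets of literals (this is the simple literal case, not the general Brayton division algorithm cited in the text):

```python
# Algebraic division of a DNF expression by a single-literal divisor.

def divide_by_literal(terms, d):
    """Split DNF terms into quotient Q (terms containing d, with d
    removed) and remainder R (terms not containing d), so f = d*Q + R."""
    quotient = [term - {d} for term in terms if d in term]
    remainder = [term for term in terms if d not in term]
    return quotient, remainder

# f = xyz + wx + wyz divided by the literal w gives f = w(x + yz) + xyz.
terms = [frozenset("xyz"), frozenset("wx"), frozenset("wyz")]
q, r = divide_by_literal(terms, "w")
print(sorted(map(sorted, q)), sorted(map(sorted, r)))
```

By construction the divisor and quotient share no literals, satisfying the algebraic-division condition stated above.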
[0066] The Cost Model [0067] The disclosed method uses a cost model to choose between various index intersection and union plans. However, in the exemplary embodiment the disclosed factorization algorithms are not tied to the disclosed specific cost model and could be applied just as well with an alternative cost model. [0068] There are two costs associated with an index-intersection and union strategy: [0069] 1. Index-Retrieval Cost: This is the cost associated with retrieving the record identifier list that corresponds to a literal. This may require accessing the index pages on disk. This cost is assumed to be proportional to the number of record identifiers retrieved. Thus, a record identifier list of length l would cost k_r·l to retrieve, where k_r is a constant. [0070] 2. Hash-Merge Cost: Intersecting or unioning two (or more) record identifier lists is normally performed by a hash-merge operation. The cost of this operation is taken to be linear in the total number of record identifiers (RIDs) being intersected or unioned. Index intersection and union may also be performed by a sort-merge operation, which may also be assumed to have a cost linear in the total number of rows involved. Thus, given two record identifier lists of lengths l_1 and l_2, the hash-merge cost is k_m·(l_1+l_2), where k_m is a constant. [0071] In one embodiment, data fetch cost is not included in the cost model, because this embodiment assumes that all indexed predicates are evaluated via the index intersection and union strategy. This assumption ensures that the data access cost is the same for all possible index intersection and union plans. This assumption is removed in the combined factoring, condition relaxation and index selection embodiment described below. [0072] Defining Quality of Literal Divisors [0073] The disclosed method defines the quality of literal divisors to facilitate a choice between two different divisors, and to facilitate determining whether factoring is, in fact, advantageous at all for a given expression.
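The two cost components can be written as simple linear functions. The numeric constants below are placeholders chosen for illustration; the disclosure only requires that both costs be linear in list length:

```python
# Sketch of the linear cost model: retrieval cost k_r per RID fetched,
# hash-merge cost k_m per RID flowing through an intersection or union.
K_R = 1.0   # index-retrieval cost per record identifier (assumed value)
K_M = 0.5   # hash-merge cost per record identifier (assumed value)

def index_retrieval_cost(list_length):
    """Cost of fetching a record identifier list of the given length."""
    return K_R * list_length

def hash_merge_cost(*list_lengths):
    """Cost of intersecting or unioning RID lists, linear in total RIDs."""
    return K_M * sum(list_lengths)
```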
[0074] Consider an expression ax_1+ax_2+ . . . +ax_k, in which the literal a appears in k terms. The expression can be factorized as a(x_1+x_2+ . . . +x_k), so that the record identifier list of a is retrieved once and intersected once with the union of the x_i lists, instead of being retrieved and intersected k times. [0075] The gain from factorization can be approximated by:

G=(k−1)(k_r+k_m)·l_a−k_m·l_∪  (1)

[0076] where l_a is the length of the record identifier list of a, and l_∪ [0077] is the resultant length of the union x_1+x_2+ . . . +x_k. Since the length of the union is at most the sum of the lengths of the lists being unioned, l_∪ [0078] is approximated to Σ_i l_{x_i}, the sum of the lengths of the record identifier lists of the x_i. [0079] Thus, the gain formula becomes:

G=(k−1)(k_r+k_m)·l_a−k_m·Σ_i l_{x_i}  (2)
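A small function makes the trade-off concrete. The formula below is one plausible concretization of the conservative gain estimate under the linear cost model; the constants and exact form are assumptions for illustration:

```python
# Gain from factoring a out of a*x1 + ... + a*xk: the list for a is
# fetched and intersected once instead of k times, at the price of
# unioning the x_i lists up front. k_r, k_m as in the cost model above.

def factoring_gain(l_a, x_lengths, k_r=1.0, k_m=0.5):
    k = len(x_lengths)
    saved = (k - 1) * (k_r + k_m) * l_a   # k-1 fetches/intersections of a avoided
    extra = k_m * sum(x_lengths)          # conservative cost of unioning the x_i
    return saved - extra
```

A long list for a repeated three times gives a clearly positive gain, while a short list factored out of terms with very long x_i lists makes the gain negative, in which case the factorization is rejected.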
[0080] The gain formula above is a conservative estimate of the total savings. The factorization algorithm is general and does not depend on the exact cost model disclosed. [0081] Factorization Algorithm [0082] Algorithm 1, set out below and illustrated by the flow chart of FIG. 3, is one algorithm that can be used to get the best exact divisor D for a given function f. Algorithm 1 uses a defined gain function to evaluate the quality of a divisor D to return the BestDivisor. [0083] Algorithm 1: GetBestExactDivisor(f) [0084] 1: BestDivisor=NULL; [0085] 2: BestDivisorScore=0; [0086] 3: for all literal divisors D of f do [0087] 4: Compute gain G from factorizing f using D; [0088] 5: if G>BestDivisorScore then [0089] 6: BestDivisorScore=G; [0090] 7: BestDivisor=D; [0091] 8: end if [0092] 9: end for [0093] 10: return BestDivisor; [0094] Referring to FIG. 3, Algorithm 1 evaluates all literal divisors D of the selection condition f. In one embodiment, only the divisors that are indexed are evaluated. Algorithm 1 computes the gain G for each candidate divisor and returns the divisor with the highest positive gain. [0095] Algorithm 2, set out below, is an exact factorization algorithm that is simply a greedy approach to factorization using literal divisors D returned by Algorithm 1. [0096] Algorithm 2: ExactFactorize(f) [0097] 1: BestDivisor=GetBestExactDivisor(f); [0098] 2: if BestDivisor≠NULL then [0099] 3: Factorize f as f=BestDivisor.Q+R [0100] 4: return BestDivisor.ExactFactorize(Q)+ExactFactorize(R) [0101] 5: end if [0102] 6: //There are no useful divisors [0103] 7: return f; [0104] Algorithm 2 uses Algorithm 1 to get the best exact factor for a given selection condition f to produce a rewritten selection condition. In the exemplary embodiment, the rewritten selection condition is in the form BestDivisor.Quotient(Q)+Remainder(R). The algorithm uses the BestDivisor returned by Algorithm 1 to factorize the selection condition. In one embodiment, the selection condition f is recursively factorized.
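The two pseudocode routines translate directly into a runnable sketch. The term-set representation and the toy gain function (which merely counts repetitions) are assumptions standing in for the cost-based gain:

```python
# Greedy exact factorization in the style of Algorithms 1 and 2.
# A DNF expression is a list of frozensets of literal names; the result
# is a nested ('AND', ...)/('OR', ...) tree of the factored form.

def get_best_exact_divisor(f, gain):
    best_divisor, best_score = None, 0
    for d in sorted(set().union(*f)):        # all literals, deterministic order
        g = gain(f, d)
        if g > best_score:
            best_score, best_divisor = g, d
    return best_divisor

def exact_factorize(f, gain):
    d = get_best_exact_divisor(f, gain)
    if d is None:                            # no useful divisors: return f as-is
        return ('OR', [('AND', sorted(t)) for t in f])
    quotient = [t - {d} for t in f if d in t]
    remainder = [t for t in f if d not in t]
    factored = ('AND', [d, exact_factorize(quotient, gain)])
    if remainder:
        return ('OR', [factored, exact_factorize(remainder, gain)])
    return factored

def count_gain(f, d):
    """Toy gain: number of repetitions saved (stand-in for Equation 2)."""
    return sum(1 for t in f if d in t) - 1
```

Running this on f=xyz+wx+wyz picks w as the divisor and produces the factored tree for w(x+yz)+xyz.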
[0105] In the exemplary embodiment, the space of literals over which Algorithm 1 searches for divisors consists of all literals appearing in at least two disjuncts. Algorithm 2 uses a hill-climbing strategy to arrive at a solution by a sequence of incremental improvements. Algorithm 2 can get stuck at local maxima; this is the price of using an algorithm of quadratic complexity on a problem for which computing the optimal solution is infeasible even for relatively small expressions. This is not expected to be a major problem in practice. [0106] Algorithms 1 and 2 may be replaced by or modified to include a dynamic programming strategy that progressively identifies the gain in factorizing from any subset of the disjuncts, by first computing the gain, if any, for pairs of disjuncts, then for triplets of disjuncts, and so on. FIG. 4 is a flowchart that illustrates a dynamic strategy in which all possible subsets of disjuncts of the selection condition are identified and evaluated. [0107] Algorithms 1 and 2 may also be extended in order to produce three special kinds of plans: single-index plans, index-intersection plans, and index-intersections-followed-by-unions (IIFU) plans. Referring to FIG. 5, single-index and index-intersection plans can be generated by extending Algorithms 1 and 2. [0108] In order to produce index-intersections-followed-by-unions (IIFU) plans, the method modifies or uses a simplified version of Algorithms 1 and 2. FIG. 6 illustrates one method that can be used to generate index-intersections-followed-by-unions (IIFU) plans. In this method, the best divisor D is identified and factored out of the selection condition. [0109] In the exemplary embodiment, the input to the algorithm is the original expression rewritten in the disjunctive normal form. By flattening out all original expressions into disjunctive normal form, the exemplary method exposes the most possible common factors. However, conversion of an arbitrary expression to disjunctive normal form could have an exponential cost in the worst case.
Fortunately, this does not turn out to be a major problem. The conversion to disjunctive normal form is expensive only when converting an expression that is nearly in conjunctive normal form, and has very few repeating literals. In such a case, it is likely that the given expression is, itself, very close to being the best possible representation of the condition. [0110] Factorizing Selection Conditions that are not in Disjunctive Normal Form [0111] In one embodiment, instead of converting to disjunctive normal form and then re-factorizing the condition from scratch, the method could take the given expression as it is, and work bottom-up on the AND-OR tree in order to perform factorization. FIG. 7 illustrates a method for factorizing a selection condition without converting the selection condition to disjunctive normal form. The method illustrated by FIG. 7 factorizes the selection condition and returns an IIFU expression at the root. In the method illustrated by FIG. 7, the method determines whether each lowest node of the AND-OR tree is an AND node or an OR node. [0112] When the lowest node of a selection condition that is not in DNF is an OR node, the method treats the subtree of each of the OR nodes as a separate DNF expression, which may be factorized using Algorithms 1 and 2. If the method seeks only index-intersection plans, the method only needs to consider index-intersection plans for these subtrees. If the method seeks IIFU plans, the method will maintain both the set of perfect factors that may be used to develop an index-intersection plan for the subtree, and the factorized form that may lead to an IIFU plan for the subtree. If the method looks for general IIU plans, the method will maintain the index-intersection set, and the factorized form that leads to the best IIU plan for the subtree. Once the method has identified these sets and factorized forms at the lowest OR nodes, the method looks at the AND nodes immediately above these OR nodes.
(If there are no such AND nodes, there was only one OR node and the expression was in DNF; the solution is then available at the OR node.) At an AND node, the method attempts to synthesize the set of perfect factors and the best factored form for its subtree using the information maintained at its children. The set of perfect factors is obtained simply by unioning the sets of all its children. [0113] Identifying the best factored form at an AND node can be harder than identifying the best factored form at an OR node. If only IIFU plans are desired, the method could be configured to just choose the IIFU factored form of one of the children and evaluate the rest of the conditions as a filter. Note that there is no direct way of combining one IIFU plan with any other plan that involves index intersections in order to produce another IIFU plan. One situation where the method can do better is when multiple IIFU factored forms have the same factors. For example, AB+CD is an IIFU factored form. AB+EF is another. Note that these two forms can be combined at an AND node to produce (AB+CD)(AB+EF). This form is not an IIFU form, though, since it would require performing an index intersection at the end. The method could choose to retain one of the two forms, (AB+CD) or (AB+EF), and evaluate the other as a filter. But, in this case, since AB is actually present in both factored forms, the method can simplify the product expression (AB+CD)(AB+EF) as AB+CDEF, which is an IIFU factored form, since it can be evaluated by a series of index intersections followed by one index union. The method uses efficient algorithms to detect such common factors across the factored forms and considers generating IIFU forms in this fashion. [0114] At an OR node, the best factored form for the subtree can be easily identified.
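The AB+CDEF simplification above amounts to multiplying the two forms out to DNF and dropping absorbed terms. A minimal sketch, with terms again modeled as frozensets of literal names:

```python
# Combine two DNF forms at an AND node, then simplify by absorption:
# any term that strictly contains another term is redundant, which
# collapses (AB+CD)(AB+EF) to AB+CDEF.

def and_dnf(f, g):
    """DNF of the conjunction of two DNF expressions."""
    return {a | b for a in f for b in g}

def absorb(terms):
    """Drop every term that strictly contains some other term."""
    return {t for t in terms if not any(u < t for u in terms)}
```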
The method simply combines the IIFU factored forms of all the children to produce the IIFU form at the OR node, since the method is simply combining all the existing index-union operations into a single union. Identifying the set of perfect factors is achieved by treating each child as a conjunct of its set of perfect factors, and identifying the perfect factors of the consequent DNF expression. In the general case, the method can replace complex sub-expressions by new symbols so as to make the subtree's overall expression appear to be in DNF. The method then applies the standard factorization algorithm to the expression, taking care to suitably modify the cost functions to deal with complex subexpressions being factors. For example, if the two children of an OR node were (A+B)(CD+E) and (A+B)(EF+G), the method can logically replace A+B by X, CD+E by Y, and EF+G by Z, treating the overall expression as a DNF expression XY+XZ. The method can then factorize this “DNF” expression into X(Y+Z) if the cost function determines that such a factorization is useful. [0115] Condition Relaxation via Predicate Merging [0116] In the embodiment disclosed above, exact factoring is performed to generate an efficient index intersection and union plan for a given selection condition. In one embodiment, the search space is expanded to generate better index intersection and union plans by relaxing the original condition. For example, an expression E may have the form XA+YB. The expression E′=(X+Y)(A+B) is a relaxation of the original expression E. This is easily verified by multiplying out the factors of E′ to get the original expression XA+YB plus extra terms XB+YA that can easily be filtered out at the end.
E′ might be preferable to E if predicates X and Y overlap and enable more efficient evaluation of X+Y than evaluating X and Y individually. The factored expression E′=(X+Y)(A+B) can be used to find a relaxed set of record identifiers, which can then be filtered by an exact evaluation of the original expression E. As described above, there is a potential advantage when X and Y are identical (exact factoring). In one embodiment, the method explores the case where X and Y are not identical. [0117] In the exemplary embodiment, the space of possible merges to consider is limited to predicates which, when merged, will yield another atomic predicate that can be evaluated by a single index scan. Examples of such predicates are single-dimensional range predicates: two overlapping ranges on the same column merge into a single covering range that can be evaluated by one index scan. [0118] Merging predicates as in the above example to enable better factorization is a tradeoff between reduced index-retrieval and index-intersection cost on the one hand and increased index-union cost on the other, since the merged predicate's record identifier list is at least as long as that of either original predicate. [0119] In the exemplary embodiment, both relaxed factoring and exact factoring are performed in the same framework. The only difference is that, instead of choosing the best factor amongst predicates repeated exactly, the method expands the set to include merges of overlapping predicates. It is noted that all the algorithms presented above, including those that work with arbitrary AND-OR expressions rather than DNF, generalize directly to accommodate the presence of merged predicates. This disclosure presents an algorithm that can be used to find the best merged predicate. Recall that in the exemplary embodiment, merges are limited to only those atomic predicates that can be merged to another atomic predicate. A unified framework is used to evaluate the quality of a factor, whether exact or approximate.
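Over record identifier sets, the relaxation-then-filter strategy is easy to demonstrate. The RID sets below stand in for index scans; the function names are illustrative:

```python
# Evaluate the relaxed factored form E' = (X+Y)(A+B) to get a candidate
# RID set, then filter with the exact condition E = XA + YB. The relaxed
# set is always a superset of the exact answer, so the filter is safe.

def relaxed_candidates(X, Y, A, B):
    return (X | Y) & (A | B)          # one union per factor, one intersection

def exact_answer(X, Y, A, B):
    return (X & A) | (Y & B)          # the original condition E

def relax_then_filter(X, Y, A, B):
    exact = exact_answer(X, Y, A, B)
    return {rid for rid in relaxed_candidates(X, Y, A, B) if rid in exact}
```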
Thus, in the exemplary embodiment a merged predicate is treated as a factor, whose quality can be evaluated in the same manner and in the same units as an exact factor. The gain formula used to determine the gain of an exact predicate is modified to include merged predicates. [0120] Evaluating the Quality of a Merged Predicate [0121] Consider an expression a_1x_1+a_2x_2+ . . . +a_kx_k, in which the overlapping atomic predicates a_1, . . . , a_k can be merged into a single atomic predicate a covering their union. [0122] In one embodiment, the gain in factorizing the expression can then be estimated by the formula:

G=(k_r+k_m)·(Σ_i l_{a_i}−l_a)−k_m·Σ_i l_{x_i}
[0123] This formula reduces to disclosed gain formula (2) when all the a_i are identical. [0124] Algorithm for Choosing the Best Merged Predicate [0125] The disclosed method selects the set of predicates to merge so as to maximize the gain from factorizing the expression. The space of possibilities the method considers has two degrees of freedom: [0126] 1. The method can choose the dimension along which predicates should be merged. [0127] 2. The method can choose which set of predicates to relax along the chosen dimension. [0128] In one embodiment, the first degree of freedom can be handled by an exhaustive linear search. In this embodiment, the method identifies the best solution along each dimension and picks the best among these. There are an exponential number of possibilities associated with the second degree of freedom. In the exemplary embodiment, the exact predicates to relax are identified using the disclosed cost function, in polynomial time. [0129] This algorithm can also be applied in cases where the merge of two atomic predicates results in an atomic predicate that covers more than the union of the two predicates. For example, in one embodiment when the method merges two multi-dimensional range predicates, the method may produce a single multi-dimensional predicate that might be larger than their union. For such merge operations, the disclosed algorithm is not guaranteed to produce the optimal solution. Identifying the optimal solution in such cases is likely to be NP-hard. In one embodiment, the disclosed algorithm guarantees optimality when the intersection operator distributes over the merge operator applied to these predicates. [0130] The disclosed algorithm computes the set of predicates to be merged in an incremental fashion, while also maintaining the associated gain value G. The incremental computation of gain G is achieved by incrementally maintaining an auxiliary function F for all sets of predicates, and carefully choosing the initial values for these functions.
The algorithm defines functions F and G, which are used as subroutines by Algorithm 3, SelectBestMergedFactor(f) (defined below). [0131] For all predicates a occurring in the original expression as a conjunction ax, an initial value F(a) is defined from the lengths of the record identifier lists of a and x. [0132] If two overlapping predicates, a_1 and a_2, are merged into a predicate a, F(a) is computed incrementally from F(a_1), F(a_2), and the lengths of the record identifier lists involved. [0133] Define G(a)=Max(F(a),0). [0134] It is noted that the gain function G(a) is computed in the same fashion as the value computed by the gain function G disclosed for merged predicates. [0135] Algorithm 3, set forth below, is one algorithm that may be used to select the best merged factor. [0136] Algorithm 3: SelectBestMergedFactor(f) [0137] 1: BestGain=0; CurrentAnswer={ }; [0138] 2: for all indexed columns c in f do [0139] 3: Let S=Set of all atomic ranges on Column c in f; [0140] 4: while true do [0141] 5: Select two overlapping predicates a_1 and a_2 in S whose merge a increases F; [0142] 6: If no such a_1 and a_2 exist, break; [0143] 7: Remove a_1 and a_2 from S; [0144] 8: Add a to S; [0145] 9: end while [0146] 10: a′=Predicate in S such that for all p in S, G(a′)≧G(p); [0147] 11: if G(a′)>BestGain then [0148] 12: BestGain=G(a′); [0149] 13: CurrentAnswer=Set of ranges merged to obtain a′; [0150] 14: end if [0151] 15: end for [0152] 16: return CurrentAnswer; [0153] Referring to FIG. 8, Algorithm 3, SelectBestMergedFactor, starts out by initializing BestGain to zero and the current answer to the empty set, and then considers each indexed column in turn. [0154] Combining Exact Factoring, Condition Relaxation and Index Selection [0155] This section of the disclosure discusses how the results of exact factoring and approximate factoring are used to obtain an optimized access path for single-relation selection. In one embodiment disclosed above, the exact and approximate factoring algorithms assume that all predicates in the boolean expression will be evaluated via an index scan. This is not always the best strategy in the presence of atomic predicates that are not selective. Choosing the right set of indices for a given boolean expression is a component of a query optimizer. However, most query optimizers do not attempt any factoring of repeated predicates.
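The merge loop of Algorithm 3, specialized to single-dimensional ranges, can be sketched as below. The tuple range representation and the scoring function (here just range width) are stand-ins for the patent's list-length-based F:

```python
# Greedy pairwise merging of overlapping one-dimensional ranges, in the
# spirit of Algorithm 3's inner loop: keep merging an overlapping pair
# while the merge raises the score F; stop when no merge improves F.

def overlaps(r1, r2):
    return r1[0] <= r2[1] and r2[0] <= r1[1]

def merge(r1, r2):
    return (min(r1[0], r2[0]), max(r1[1], r2[1]))

def greedy_merge(ranges, f_score):
    s = list(ranges)
    while True:
        best = None
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                if overlaps(s[i], s[j]):
                    m = merge(s[i], s[j])
                    if f_score(m) > max(f_score(s[i]), f_score(s[j])):
                        best = (i, j, m)
        if best is None:
            return s                  # no improving merge remains
        i, j, m = best
        s = [r for k, r in enumerate(s) if k not in (i, j)] + [m]
```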
Hence, this disclosure discusses how the new factoring algorithms integrate with the existing index-selection algorithms. [0156] In one embodiment, index selection is invoked either before or after the factorization steps. There is a need for an integrated approach, because either ordering alone could be suboptimal. [0157] Index selection before factoring. Consider the expression AB+BC. A traditional optimizer might, based on the selectivity of the different predicates, choose the index on A for the AB disjunct and the index on B for the BC disjunct, leaving the rest to be evaluated as a filter. If the indexed part of the expression, A+B, is submitted to the index intersection and union plan optimizer, no common factor will be found. In contrast, if the factoring algorithm is invoked first and B(A+C) is found to be a gainful factor, the index selection step might choose an index scan on B followed by a filtered evaluation of A+C. [0158] Index selection after factoring. A factorization algorithm may be dependent on the index selection decision. In the above example, if A+C produces large intermediate record identifier lists, the gain of factoring out B as a divisor might become negative. The formula for computing gain in Equation 2 above assumes that A+C is calculated via an index scan, so the gain calculated by that formula may be inaccurate when some predicates are better evaluated as filters. [0159] The decision to use an index for a predicate scan is both influenced by and influences the factoring decision. In the exemplary embodiment, the exact and relaxed condition algorithms are modified to provide an integrated solution to the problem. [0160] In the exemplary embodiment, a subroutine, BestSelectionPlan, is used that, when presented with a factorized expression, finds the best access path after selecting the right subset of indices and exploits the factorization wherever possible. One such subroutine is disclosed by C.
Mohan et al., Single table access using multiple indexes: optimization, execution, and concurrency control techniques. [0161] In the exemplary embodiment, the gain function, which assumes that each literal in an expression will be evaluated via an index scan, is corrected. A gain function that does not assume each literal will be evaluated via an index scan is disclosed below. [0162] The Modified Gain Function [0163] Consider a divisor D that can factorize its parent expression f as DQ+R. In the exemplary embodiment, the gain G(D) is defined as the difference between the following two costs: [0164] 1: c_1, the cost of the best selection plan for f without exploiting the factorization; and [0165] 2: c_2, the cost of the best selection plan for the factored form DQ+R. [0166] In this embodiment, the gain is defined as G(D)=c_1−c_2. [0167] The Combined Algorithm [0168] Algorithms 4 and 5 combine the exact and approximate factoring of expressions. In one embodiment, an optimized set of indexes is selected by using the modified gain function, which invokes the BestSelectionPlan subroutine. [0169] Algorithm 4: GenerateBestFactoredPlan(f) [0170] 1: p=GenerateBestFactoredForm(f); [0171] 2: Plan P=BestSelectionPlan(p); [0172] 3: if Flag APPROXIMATED=true then [0173] 4: Add a filter to the top of the plan (before data access) with the appropriate filter condition; [0174] 5: end if [0175] Algorithm 5: GenerateBestFactoredForm(f) [0176] 1: e=GetBestExactFactor(f); //using new definition of gain.
[0177] 2: m=GetBestMergedFactor(f); [0178] 3: If both e and m have gain≦0, return f; [0179] 4: If G(e)>G(m) [0180] 5: //The exact factorization is more beneficial [0181] 6: Factorize f as f=e.Q+R; [0182] 7: q=GenerateBestFactoredForm(Q); [0183] 8: r=GenerateBestFactoredForm(R); [0184] 9: p=e.q+r; [0185] 10: return p; [0186] 11: else [0187] 12: //Approximate factorization is better [0188] 13: Obtain f′ from f by replacing all predicates belonging to S by m; [0189] 14: Do the same steps 6-9, but with f′ instead of f; [0190] 15: Set Flag APPROXIMATED=true; [0191] 16: //We eventually need to add a filter on top of the generated plan, to select only those tuples that satisfy f; [0192] 17: return p; [0193] 18: end if [0194] Referring to FIG. 9, Algorithm 5 retrieves the best exact factor e and the best merged factor m for the selection condition and recursively factorizes using whichever yields the greater gain. [0195] Algorithm 4 invokes Algorithm 5 to generate a factored form of the selection condition. The selection plan P is obtained from the subroutine BestSelectionPlan for the factored form returned by Algorithm 5. If a merged factor was factored out of the selection condition, a filter condition is added to the top of the plan. The filter removes tuples that do not satisfy the original selection condition f, because predicates were replaced with a merged predicate. [0196] Examples of Experimental Evaluation of Disclosed Algorithms [0197] The following examples indicate that the algorithms improve performance for different query characteristics and that the factoring algorithms are robust, i.e., optimization does not produce plans inferior to a plan generated by a traditional optimizer. A proof included in Appendix A shows the optimality of the algorithm for merging predicates and justifies using literals as factors in the disclosed factoring algorithm. [0198] In all these experiments, a version of the factorization algorithm that produces arbitrary index-intersection and union plans, working from a DNF expression, is used.
[0199] Experimental Setup [0200] Synthetic workloads were generated to measure the improvement in query plans for various query characteristics using the disclosed algorithms. The experiments were carried out on a 1 GB TPCH database (with skew 1) on a commercial DBMS. A single table was used that was an extension of the line item table, extended by foreign-key joins. There were indexes present on 6 of the columns, many of which were key fields. All queries were capable of being evaluated solely from indexes. [0201] In order to generate overlapping predicates, range predicates were used on different fields of the table. The queries were rewritten suitably in order to force the query optimizer to choose a certain execution plan. [0202] The cost of each of the query plans was measured in two different ways: estimated execution time, and the actual execution time averaged over multiple runs of the query. The results presented below are on estimated execution times, and each of the data points shown in the graphs was obtained in this manner. [0203] Query Parameters [0204] The following are query characteristics that were controlled: [0205] 1. Query Structure: WHERE clauses were generated in DNF. So, query structure was determined by the total number of terms generated, and the distribution of the number of literals per term. [0206] 2. Predicate Selectivities: The selectivities of the predicates in the query were chosen either from a Gaussian distribution, or from a uniform distribution with a certain peak value. The selectivities of predicates in the generated queries were controlled using the statistics maintained by the database system. [0207] 3. Repeating Predicate: In many of the experiments, only one predicate was made to repeat (either exactly or approximately) in order to isolate the effect of a single factorization. The selectivity of the repeating predicate and the number of repetitions were controllable. [0208] 4.
“The Fudge Factor”: This factor controlled the degree of overlap for the generation of overlapping predicates. If this factor is one, the overlapping predicates are identical. If the factor is zero, the overlap between the predicates is zero. Thus, the value of the fudge factor controls how much the overlapping predicates intersect. [0209] The outputs generated were as follows: [0210] 1. The cost of the original query plan generated by the query optimizer. Most often, the original plan was to perform a linear scan of the clustered index. [0211] 2. The cost of the index intersection and union plan on the original DNF of the WHERE clause. [0212] 3. The cost of the index intersection and union plan on the factored form of the WHERE clause. [0213] The results on the performance improvement obtained for various query characteristics are presented below. [0214] The Selectivity of the Repeating Predicate [0215] FIG. 10 shows the estimated execution times for a query whose WHERE clause had 6 terms with 2 literals/term. There was exactly one predicate that repeated, and it repeated exactly once. All the predicate selectivities were fixed at 0.01 except for the selectivity of the repeating predicate, which was varied from 0.01 to 0.1. The time savings became larger and larger as less selective predicates were factored out. There was nearly a 50% improvement in execution time when the repeating predicate selectivity was 0.1. Results for other predicate selectivity values and distributions are presented below. [0216] FIG. 11 shows the estimated times when a “fudge factor” of 0.5 was used to generate an “approximate” repeating predicate, and the selectivities of the rest of the predicates were set at 0.01. FIG. 11 shows that approximate factoring appears to produce nearly as good a time savings as exact factoring for this class of queries, and the payoff increases as the selectivity of the merged predicate increases.
[0217] The plan selected by the DBMS was a linear scan in all these cases and had a cost of around 144, which happens to be about 6 times the cost of the worst plan generated by the disclosed algorithms. [0218] FIG. 12 shows the estimated execution times when all the selectivities were scaled together, i.e., the selectivity of the repeating predicate was made equal to that of the other predicates. Again, a consistent improvement in performance was realized, as in FIG. 11. [0219] FIG. 13 repeats the same experiment, but with a fudge factor of 0.5, and with the selectivity of the merged predicate made identical to the selectivity of the rest of the predicates. Again, a considerable improvement in performance using approximate factoring was realized. [0220] Sensitivity to the Number of Repetitions [0221] For the same query structure (6 terms, 2 literals/term), FIG. 14 shows the sensitivity to the number of repetitions of a predicate in the query when all predicate selectivities are set at 0.01. There is an almost super-linear improvement in performance with an increase in the repetition factor. [0222] FIG. 15 plots the same graph, but this time with all selectivities set to 0.1. FIG. 15 shows that the improvement is not confined to queries with highly selective predicates but is just as marked even for unselective predicates. [0223] Sensitivity to the “Fudge Factor” [0224] FIG. 16 shows the sensitivity to the fudge factor used in generating approximately similar factors. As noted earlier, a fudge factor of 0 means that there is absolutely no intersection between predicates, while a fudge factor of 1 implies that all the repeated predicates are identical. FIG. 16 shows the effect of the fudge factor, with all predicates being of selectivity 0.01, and with two intersecting predicates whose union also has selectivity 0.01. [0225] FIG.
16 shows that the greater the intersection between the predicates (for the same size of the union of the predicates), the greater the improvement from factoring. The improvement varies from 3% with a “fudge factor” of 0 to 15% with a “fudge factor” of 1, which is considerable for just one approximately repeating predicate whose selectivity is less than that of all other predicates in the expression. [0226] Robustness Evaluation [0227] Extensive experiments were conducted to evaluate the robustness of factoring, i.e., to check whether factoring provides non-zero improvement irrespective of the query characteristics, and to check whether the disclosed factoring algorithms ever make things worse. The set of query characteristics that were varied includes: [0228] 1. the selectivity of the non-factored predicates, [0229] 2. the distribution function used for generating the selectivities of the different predicates (uniform or Gaussian), [0230] 3. the standard deviation of the Gaussian distribution, [0231] 4. the number of literals, and [0232] 5. the selectivity of the merged predicate when doing condition relaxation. [0233] For each of these parameters, both the actual and estimated execution times for the factored (both exact and relaxed), unfactored and linear scan options were measured. These experiments established the robustness of the disclosed optimizations. [0234] This disclosure addressed the problem of optimizing queries with complex selection conditions. The disclosed exact factoring and condition relaxation techniques transform expressions into simpler forms. This application discloses an efficient factorization algorithm. The algorithm is extended to factor in merged predicates obtained by relaxing two or more atomic predicates. A method for integrating these transforms with existing query optimizers' support for selecting indexes is disclosed.
[0235] The experiments on a commercial database system establish that factorization can provide a considerable improvement over DNF for a large class of query characteristics. These experiments demonstrate some of the classes of queries where improvement is visible. The disclosed algorithms are robust and are more efficient than the normal index intersection and union strategy for nearly all queries. [0236] Although the present invention has been described with a degree of particularity, it is the intent that the invention include all modifications and alterations falling within the spirit or scope of the appended claims. [0237] Appendix A: Proof of Optimality of Approximate-Factoring Algorithm [0238] The approximate factoring algorithm is optimal so long as the overlap/intersection operator distributes over the merge operator, operating on atomic predicates. For example, single-dimensional range predicates satisfy this property. For simplicity, this proof assumes that the predicates to be merged are single-dimensional range predicates, with the understanding that the same proof holds for other types of overlap. [0239] Functions F and G on atomic predicates are defined above. This proof now defines F and G for sets of predicates to be the corresponding function evaluated on the merge of all elements of the set. [0240] Formally, F is defined for any atomic predicate R occurring in the original expression as a conjunction Rx. [0241] For any two overlapping predicates R_1 and R_2, F of their merge is computed from F(R_1), F(R_2), and the lengths of the associated record identifier lists. [0242] The above formula also holds when R_1 and R_2 are themselves merged predicates. [0243] This proof also defines, for any set of predicates S, G(S)=max(F(S),0). [0244] At the end of Algorithm 3, there is a partition of all the range predicates on a particular column. Let the sets in this partition be called S_1, S_2, . . . , S_n. [0245] The algorithm then produces the S_i with the greatest gain G(S_i) as its answer for that column. [0246] In general, a “solution” is just a set of range predicates, all of which are to be merged together.
The gain for a union of disjoint solutions is defined here as the sum of the gains of the individual solutions. The union of multiple disjoint solutions can be unambiguously represented by a single set, because there is only one way of partitioning a set into disjoint solutions.

[0247] Lemma 1 If S is the solution generated as the output of Algorithm 3, then G(S)≧G(S′) for any S′ that is a subset of S.

[0248] Proof This lemma is proved by induction on the number of merges performed to generate the final, merged range corresponding to S. Call this final range R. First, let R be obtained by merging two atomic predicates R_{1 }and R_{2}.

[0249] In this case, S={R_{1}, R_{2}}.

[0250] Now, let R be obtained by merging an atomic range R_{1 }with a merged range R_{2}.

[0251] If S′ does not contain R_{1}, then S′ is a subset of S−{R_{1}} and, therefore, by the induction hypothesis, G(S−{R_{1}})≧G(S′). Also, since Algorithm 3 merged R_{1 }and R_{2}, it is known that
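The definitions used by the lemma — G clipping negative gains to zero, and the gain of a union of disjoint solutions being the sum of the individual gains — can be encoded directly. Here F is a stand-in gain model, since the patent's actual formula for F is not reproduced in this excerpt:

```python
# Illustrative encoding of the gain functions used in the proof.
# F (supplied by the caller) scores a set of predicates to be merged;
# it is a placeholder here, not the patent's actual formula.

def G(F, solution):
    """G(S) = max(F(S), 0): a merge is never worse than not merging."""
    return max(F(solution), 0)

def gain_of_union(F, solutions):
    """Gain of a union of disjoint solutions = sum of individual gains."""
    return sum(G(F, s) for s in solutions)
```

For instance, with a toy model `F = lambda s: len(s) - 2`, a three-predicate solution has gain 1, a singleton's negative score is clipped to 0, and the union of the two scores 1.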
[0252] This proves that if R is obtained by merging an atomic predicate R_{1 }with a merged predicate R_{2}, the lemma holds.

[0253] Now, let R be obtained by merging two merged predicates R_{1 }and R_{2}. It is then known that
[0254] This concludes the proof.

[0255] Lemma 2 If S is the output generated by Algorithm 3, and S′ is the optimal solution, every pair of predicates in S′ is merged together into the same partition at some point in the execution of Algorithm 3 and is consequently in S.

[0256] Proof This statement is proved by induction on the number of predicates in S′. As the base case, let the number of predicates in S′ be 2, and call the predicates R_{1 }and R_{2}.

[0257] Now, let the partitions generated by Algorithm 3 be S_{1}, . . . , S_{k}.

[0258] Therefore, let there be some two intersecting predicates R_{1 }and R_{2 }in S′.

[0259] Therefore, at least one of S.

[0260] This completes the induction step.

[0261] Theorem 3 Algorithm 3 generates the optimal set of predicates to be merged.

[0262] Proof It is observed that G(S) accurately captures the utility of approximating all the factors in the set S. Together with this observation, Lemmas 1 and 2 directly imply that Algorithm 3 produces the optimal solution.
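The greedy structure that Lemmas 1 and 2 reason about can be sketched as follows. This is a hedged approximation of "Algorithm 3": the merge-acceptance condition shown (combined gain no worse than the sum of the parts) is a placeholder, since the excerpt does not reproduce the algorithm's exact test, and `F` is again a stand-in gain model.

```python
# Sketch of a greedy pairwise merge in the spirit of Algorithm 3:
# repeatedly merge a pair of overlapping groups of range predicates
# when doing so does not lower the total gain, until no pair qualifies.

def ranges_overlap(a, b):
    # Two groups overlap if any range in one overlaps any range in the other.
    return any(lo2 <= hi1 and lo1 <= hi2
               for (lo1, hi1) in a for (lo2, hi2) in b)

def greedy_merge(groups, overlaps, F):
    """groups: list of frozensets of (lo, hi) ranges on one column.
    Returns the final partition of the ranges into merged groups."""
    changed = True
    while changed:
        changed = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                a, b = groups[i], groups[j]
                if overlaps(a, b) and F(a | b) >= F(a) + F(b):
                    groups[j] = a | b      # replace b with the merged group
                    del groups[i]          # drop a; restart the scan
                    changed = True
                    break
            if changed:
                break
    return groups
```

Starting from singleton groups `{(1, 5)}`, `{(4, 8)}`, `{(10, 12)}`, the first two groups overlap and are merged, while the third remains its own partition — matching the partition structure S_{1}, . . . , S_{k} that the proof describes.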