|Publication number||US7996388 B2|
|Application number||US 11/874,202|
|Publication date||Aug 9, 2011|
|Filing date||Oct 17, 2007|
|Priority date||Oct 17, 2007|
|Also published as||US20090106214|
|Inventors||Namit Jain, Anand Srinivasan, Shailendra Kumar Mishra|
|Original Assignee||Oracle International Corporation|
This application is related to, and incorporates by reference herein in its entirety, a commonly-owned and concurrently-filed U.S. application Ser. No. 11/874,197, entitled “DYNAMICALLY SHARING A SUBTREE OF OPERATORS IN A DATA STREAM MANAGEMENT SYSTEM OPERATING ON EXISTING QUERIES” by Namit Jain et al.
It is well known in the art to process queries over continuous streams of data using one or more computer(s) that may be called a data stream management system (DSMS). Such a system may also be called an event processing system (EPS) or a continuous query (CQ) system, although in the following description of the current patent application, the term “data stream management system” or its abbreviation “DSMS” is used. DSMS systems typically receive from a user a textual representation of a query (called “continuous query”) that is to be applied to a stream of data. Data in the stream changes over time, in contrast to static data that is typically found stored in a database. Examples of data streams are: real time stock quotes, real time traffic monitoring on highways, and real time packet monitoring on a computer network such as the Internet.
As shown in
As noted above, one such system was built at Stanford University, in a project called the Stanford Stream Data Management (STREAM) Project, which is described in an article entitled “STREAM: The Stanford Data Stream Management System” by Arvind Arasu, Brian Babcock, Shivnath Babu, John Cieslewicz, Mayur Datar, Keith Ito, Rajeev Motwani, Utkarsh Srivastava, and Jennifer Widom, published on the Internet in 2004. The just-described article is incorporated by reference herein in its entirety as background.
For more information on other such systems, see the following articles each of which is incorporated by reference herein in its entirety as background:
Continuous queries (also called “persistent” queries) are typically registered in a data stream management system (DSMS) prior to its operation on data streams. The continuous queries are typically expressed in a declarative language that can be parsed by the DSMS. One such language, called “continuous query language” or CQL, has been developed at Stanford University primarily based on the database query language SQL, by adding support for real-time features, e.g. adding data stream S as a new data type based on a series of (possibly infinite) time-stamped tuples. Each tuple s belongs to a common schema for the entire data stream S, and the times t form a monotonically non-decreasing sequence. Note that such a data stream can contain 0, 1 or more pairs having the same (i.e. common) time stamp.
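The stream data type described above can be illustrated by the following minimal sketch (the function name and representation are illustrative, not taken from the patent or from CQL): a stream is modeled as a series of (timestamp, tuple) pairs whose timestamps form a monotonically non-decreasing sequence, with multiple pairs permitted at the same timestamp.

```python
from typing import Any, List, Tuple

def is_valid_stream(pairs: List[Tuple[int, Any]]) -> bool:
    """Check the CQL stream invariant: timestamps are monotonically
    non-decreasing (duplicates allowed, so pairs may share a timestamp)."""
    return all(t1 <= t2 for (t1, _), (t2, _) in zip(pairs, pairs[1:]))

# Two pairs share timestamp 1, which is permitted; out-of-order timestamps are not.
assert is_valid_stream([(1, "a"), (1, "b"), (2, "c")])
assert not is_valid_stream([(2, "a"), (1, "b")])
```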
Stanford's CQL supports windows on streams (derived from SQL-99) based on another new data type called “relation”, defined as follows. A relation R is an unordered group of tuples at any time instant t which is denoted as R(t). The CQL relation differs from a relation of a standard relational database accessed using SQL, because traditional SQL's relation is simply a set (or bag) of tuples with no notion of time, whereas the CQL relation (or simply “relation”) is a time-varying group of tuples (e.g. the current number of vehicles in a given stretch of a particular highway). All stream-to-relation operators in Stanford's CQL are based on the concept of a sliding window over a stream: a window that at any point of time contains a historical snapshot of a finite portion of the stream. Syntactically, sliding window operators are specified in CQL using a window specification language, based on SQL-99.
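The stream-to-relation windows described above can be sketched as follows (an illustrative simplification, not the patent's implementation; the window boundary convention shown here, tuples with timestamps in the half-open interval (t - range, t], is one common choice): at each time instant t, a time-based sliding window maps the unbounded stream to a finite relation R(t).

```python
from typing import Any, List, Tuple

def range_window(stream: List[Tuple[int, Any]], t: int, depth: int) -> List[Any]:
    """Snapshot R(t) of a time-based sliding window of the given depth:
    the tuples of the stream whose timestamps fall in (t - depth, t]."""
    return [v for (ts, v) in stream if t - depth < ts <= t]

stream = [(1, "a"), (2, "b"), (3, "c"), (5, "d")]
# At t=5 with a 3-unit window, only tuples stamped in (2, 5] remain.
assert range_window(stream, t=5, depth=3) == ["c", "d"]
```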
For more information on Stanford University's CQL, see a paper by A. Arasu, S. Babu, and J. Widom entitled “The CQL Continuous Query Language: Semantic Foundation and Query Execution”, published as Technical Report 2003-67 by Stanford University, 2003 (also published in VLDB Journal, Volume 15, Issue 2, June 2006, at Pages 121-142). See also, another paper by A. Arasu, S. Babu, J. Widom, entitled “An Abstract Semantics and Concrete Language for Continuous Queries over Streams and Relations” in 9th Intl Workshop on Database programming languages, pages 1-11, September 2003. The two papers described in this paragraph are incorporated by reference herein in their entirety as background.
An example to illustrate continuous queries is shown in
Several prior-art DSMSs, such as Stanford University's DSMS, treat queries as fixed entities and treat event data as an unbounded collection of data elements. This approach has delivered results as they are computed in near real time. However, in most continuous query systems this prior-art approach does not allow continuous queries to be added dynamically. One reason is that a query plan is computed at the time of registration of all queries, before such a prior-art DSMS even begins operations on streams of event data.
Once queries have been registered and such a prior-art DSMS begins to process event data, the query plan cannot be changed, in prior-art systems known to the current inventors. The current inventors recognize that queries can be added, for example, by quiescing Stanford University's DSMS, adding the required queries and starting up the system again. However, the current inventors note that this gives rise to indeterminate scenarios: if a DSMS is being quiesced, there is no defined checkpoint for data in a window for incomplete calls, or for data of intermediate computation that has already been performed at the time the DSMS is quiesced.
In one prior art DSMS, even after it begins normal operation by executing a continuous query Q1, it is possible for a human (e.g. network operator) to register an “ad-hoc continuous query” Q2, for example to check on congestion in a network. Such a query Q2 may be written to find a fraction of traffic on a backbone link that is coming from a customer network. In highly-dynamic environments, a data stream management system (DSMS) is likely to see a constantly changing collection of queries and needs to react quickly to query changes without adversely affecting the processing of incoming time-stamped tuples (e.g. streams).
A computer is programmed in accordance with the invention to implement a data stream management system (DSMS) that receives a new continuous query (also called simply “new query”) during execution of one or more continuous queries that have been previously registered (also called “existing queries”). The new query is to be executed by the DSMS on a stream or a relation, which may or may not be executed upon by existing queries. The new query is received (e.g. from a user) during normal operation of the DSMS in an ad-hoc manner, in the midst of processing incoming streams of data by executing a number of existing queries based on a global plan.
Specifically, a computer is programmed in several embodiments of the invention to automatically modify the global plan on the fly, to accommodate both execution of the new query and also continuing execution of existing queries. A modified plan which results therefrom may include new operators and/or sharing of one or more operators that are currently being used in execution of existing queries. Accordingly, a computer in several embodiments of the invention compiles a new query, if possible by sharing one or more operators between the new query and one or more existing queries.
In such embodiments, when compilation of the new query is complete, any operators that were not previously scheduled for execution (i.e. newly coupled operators) are also scheduled for execution, thereby to alter the above-described processing to henceforth be based on the modified plan. In some embodiments, any operators that were previously scheduled continue to execute as per schedule, independent of addition of the new query. Depending on the embodiment, execution of existing queries is performed without any interruption, or with minimal interruption from coupling and scheduling of execution of new operators required to execute the new query.
Unlike Stanford University's DSMS described in the Background section above, a new query that is added in accordance with the invention is not pre-defined. Furthermore, unlike Stanford University's DSMS, a new query is received and executed without a DSMS of several embodiments of the invention being quiesced, i.e. while continuing to receive input streams and transmit output streams. Moreover, in many embodiments, the existing operators may transmit the current value of a relation to the newly added operator (for the new query) via the newly created queues. The new query of several embodiments shares execution structures as much as possible with the existing global plan, and a newly added operator's structure(s) are populated based on existing inputs.
Moreover, unlike the prior DSMSs of the type described in the Background section above, a DSMS in accordance with the invention supports addition of queries over relations in addition to streams. In several embodiments, there is no restriction on the type of the query being added. In some embodiments, the mechanism is also independent of the internal representation of the relation in the server although in certain embodiments an incremental representation of the relation is used. In embodiments where the absolute representation of the relation is used, the propagation mechanism as described herein continues to work without any changes.
None of the prior art known to the inventors of the current patent application discloses or suggests propagation of a current state of a relation to one or more operators which are to be used by a new query. Specifically, during operation of a DSMS in accordance with the invention, when an operator on a relation (called “relation operator”) is awakened, the relation operator first propagates a current state of the relation to any operator(s) that have been newly coupled thereto, for use in execution of the new query.
In some embodiments, the current state is propagated only to those operators that are newly coupled to an existing relation operator, so that the newly coupled operators receive the current state information. The current state is not propagated to any operators that were already in existence (also called “existing operators”). After propagation of the current state, any new information received by the relation operator is processed, and results therefrom are supplied to all operators coupled to the relation operator, including newly-coupled operators and any existing operators. In this manner, these embodiments continue to process input data streams, now using the modified plan.
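The two-step behavior just described can be sketched as follows (a minimal illustration with invented names, not the patent's data structures): on wake-up, a relation operator first initializes any newly coupled outputs with the relation's current state, then delivers new tuples to all outputs, so existing outputs never see duplicated history while new outputs catch up exactly once.

```python
class RelationOperator:
    """Sketch of a relation operator whose newly coupled outputs are
    flagged as needing initialization with the relation's current state."""

    def __init__(self):
        self.state = []    # current state R(t), absolute representation
        self.outputs = {}  # output name -> {"buf": tuples seen, "needs_init": flag}

    def add_output(self, name, newly_coupled=True):
        # Outputs added for a new query are flagged for state propagation.
        self.outputs[name] = {"buf": [], "needs_init": newly_coupled}

    def awaken(self, new_tuples):
        # Step 1: propagate current state ONLY to newly coupled outputs.
        for out in self.outputs.values():
            if out["needs_init"]:
                out["buf"].extend(self.state)
                out["needs_init"] = False
        # Step 2: process new input and supply results to ALL outputs.
        for t in new_tuples:
            self.state.append(t)
            for out in self.outputs.values():
                out["buf"].append(t)
```

For example, if an output is added after two tuples have been processed, its first wake-up replays those two tuples before delivering new ones, and both old and new outputs end up with identical contents.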
Many embodiments of the invention use a DSMS whose continuous query language (CQL) natively supports certain standard SQL keywords, such as a SELECT command having a FROM clause and in addition also supports windowing functions required for stream and/or relation operations. Note that even though several keywords and/or syntax may be used identically in both SQL and CQL, the semantics are different for these two languages because SQL may be used to define queries on stored data in a database whereas CQL is used to define queries on transient data in a data stream.
A DSMS which includes a computer programmed as described in published literature about the Stanford Stream Data Management (STREAM) Project is extended, in several embodiments of the invention, by programming it with certain software called a continuous query compiler, as discussed below. A continuous query compiler is implemented in accordance with the invention to receive and act on a new continuous query q in an ad-hoc manner, e.g. on the fly during normal operation of the DSMS on existing queries. Accordingly, such a DSMS in accordance with the invention is hereinafter referred to as an extended DSMS.
After receipt, new continuous query q is automatically compiled by continuous query compiler 210 (
For example, simultaneous with generation of output data stream 231 by execution of existing queries, continuous query compiler 210 parses new continuous query q to build an abstract syntax tree (AST), followed by building a tree of operators. Such a tree of operators typically includes one or more operators (also called “source operators”) that act as source(s) of tuples based on incoming data stream(s) 250 (
In addition to source operators (which are typically but not necessarily located at leaf nodes), the tree of operators includes one or more operators at intermediate nodes (called “query processing operators”) that receive data streams from the source operators, and a single root node which includes an output operator to output the results of processing the query. The tree of operators is typically included in a logical plan which does not reference any physical structures. In creating the logical plan, any semantic errors are flagged (e.g. any type mismatches and/or references to non-existent sources of data streams). The nodes of a tree in the logical plan are typically logical operators that are supported by the continuous query language (CQL), such as SELECT and JOIN.
Several embodiments then create for that same new query q various physical operators and related resources, such as memory for a queue, needed to execute the query. Physical operators accept data of streams and/or relations as input and generate as output data of streams and/or relations. In this process, if the new continuous query q uses an operator already existing in a global plan located in memory 290 that is currently being executed (also called “executing plan”) by query execution engine 230 on incoming stream(s) 250, then continuous query compiler 210 does not create a new physical operator. Instead, continuous query compiler 210 just modifies the executing plan in memory 290.
An executing plan which is currently being used by DSMS 200 contains physical resources of all operators for all queries currently being executed. When a new query is received for execution in act 301, then as per act 308 in
Then, as per act 309, continuous query compiler 210 alters the processing of incoming data streams 250 by query execution engine 230. After being altered, query execution engine 230 continues its processing of incoming data streams 250 by executing thereon not only the existing queries but also the new query. In some embodiments, a scheduler is invoked to allocate time slot(s) for any new operator(s) of the new query that are referenced in the modified plan that results from modification in act 308. Execution of the modified plan eventually results in execution of the new continuous query at an appropriate time (depending on when its operators are scheduled for execution), in addition to execution of existing queries. Some embodiments of the invention use a lock (e.g. a reentrant read write lock), to serialize updating of an operator by compiler 210 and its execution.
In some embodiments, any output(s) that is/are newly added to a relation operator is/are identified in the modified plan as such (e.g. flagged as requiring initialization), to support propagation thereto of the relation's current state, either before or at least when the relation operator is next awakened. After state propagation, the relation operator may continue to process an incoming stream of data about a relation. Specifically, the processing continues wherever the relation operator had left off when a prior time slot ended. As noted above, a scheduler allocates time slots in which the relation operator executes. On being awakened, the relation operator of some embodiments first processes any new information on the relation that is received by the relation operator. Results of processing the new information are thereafter made available for reading at all outputs of the relation operator (including the newly added output).
Act 301 and portions of 308 (e.g. query parsing and logical tree construction) may be performed by continuous query compiler 210 of extended DSMS 200 in a manner similar or identical to a normal DSMS, unless described otherwise herein. Extended DSMS 200 of some embodiments accounts for the fact that new continuous queries can be added at any time during operation of extended DSMS 200 (e.g. while executing previously registered continuous queries), by any operator A checking (e.g. on being awakened in act 310) if the output of the operator A is a stream (as per act 311 in
Also, awakening of operators in an executing plan and propagation of a relation's state can be performed in any order relative to one another depending on the embodiment. For example, although act 310 (to awaken an operator) is shown as being performed before act 313 in
Accordingly, during registration of each new continuous query, the scheduler allocates a time slice for execution of each new operator therein. In several embodiments, the scheduler operates without interrupting one or more operators that are being executed in a current time slice. Hence, in some embodiments, the processing of existing queries is altered to permit processing of the new query thereby to effect a switchover from a currently executing plan to a modified executing plan. In an illustrative embodiment, altering of normal processing is performed at the end of a current time slice, with no delay (i.e. not noticeable in output stream 231 in
Accordingly, after registration of a new continuous query as described above, the extended DSMS continues to perform processing of input data streams 250 in the normal manner but now using the new query in addition to existing queries, i.e. based on the modified plan. Hence, output data streams 231 that were being generated by execution of existing queries continue to be generated without interruption, but are supplemented after the altering of processing, by one or more data streams from an output operator of the new continuous query, i.e. by execution of the new continuous query.
Depending on the embodiment, an unmodified plan (i.e. a global plan prior to modification) may be originally created, prior to receipt of the new continuous query, by merging of several physical plans for corresponding queries that are currently being executed. The specific methods being used in merging can be different, depending on the embodiment. In some embodiments, a new physical plan is merged into an unmodified plan by sharing just event source operators therebetween, as discussed below in reference to
Information about a relation that is supplied by link 244 is typically held in a store 280 in extended DSMS 200. Store 280 is typically multi-ported in order to enable multiple readers to access information stored therein. Store 280 may be used to store a relation R's information such as a current state R(t). In certain embodiments relation R is represented in an incremental manner, by tuples that are time stamped, and represent requests for incremental changes to the relation's initial state R(0). An example of a relation that may be represented in this manner is the number of chairs in a conference room. However, other embodiments do not use tuples, and instead maintain in memory an image of the relation's current state R(t), and this image is changed dynamically as relation R changes over time. An example of a relation that may be represented by an image is a Range Window operator on a stream, e.g. if window depth is 10, then such an image holds just 10 tuples.
In the conference room example for a relation operator described above, the current state is propagated based on the relation's incremental representation. Specifically, in this example, the number of chairs in a given conference room is one integer, which changes whenever a chair is added to or removed from the conference room. Instead of propagating every change to the number of chairs from the beginning of time, which might be a very large stream, only the current value of the number of chairs is propagated (which is just one integer or one long, depending on the storage type used for the number of chairs). So, at the point at which a new query is added, one illustrative DSMS embodiment simply propagates the value of the relation at that point in time, which depends on the changes from the beginning of time but may not contain all the changes (because an item which got added and was subsequently removed does not count).
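The chair-count example can be made concrete with the following sketch (the function name is illustrative, not from the patent): the incremental representation is a stream of signed deltas, and the propagated state is the single integer obtained by folding them, not the history itself.

```python
from typing import List, Tuple

def current_chair_count(changes: List[Tuple[int, int]]) -> int:
    """changes: (timestamp, delta) pairs, delta +1 when a chair is added
    and -1 when one is removed. Only this single integer, the current
    value, is propagated to a newly added query."""
    return sum(delta for _, delta in changes)

# Three chairs added, one removed: a new query sees only the value 2,
# not the four historical change events.
assert current_chair_count([(1, 1), (2, 1), (3, -1), (4, 1)]) == 2
```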
In embodiments that use tuples to represent a relation, tuples are typically received in extended DSMS 200 in the form of a data stream, e.g. carried by a communication link 242 from a user as shown in
The just-described stream representation of a relation in some embodiments, by time stamped tuples, is also referred to herein as an incremental representation. Although the incremental representation of a relation uses streams (i.e. Istream and Dstream), note that the relation's state is relatively static (e.g. relative to data stream 250). Hence, in practice, streams Istream and Dstream for a relation are several orders of magnitude smaller (in the rate of information flowing therein) than streams normally processed by extended DSMS 200. Use of Istream and Dstream to represent such a static relation enables several embodiments to process all information in extended DSMS 200 using a single data type, namely the stream data type. In contrast, as noted above, certain alternative embodiments of the invention store a relation's current state information in a non-incremental representation and hence use both data types.
Embodiments that use an incremental representation of a relation may implement the act of propagating the relation's state by reading the relation's initial state and all subsequent tuples from relational store 280 as illustrated in act 313 of
Moreover, each of the multiple outputs of the queue identifies any tuple references in the queue that have not yet been read by its respectively coupled reader. A tuple reference remains in the queue until readers coupled to all outputs of the queue have read the tuple reference, at which time the tuple reference is deleted from the queue. The tuple references are typically arranged in order of receipt relative to one another. A newly added output of the queue may identify to its newly-added reader one or more tuple references that have been already read by other readers coupled to other outputs of the queue. The just-described already-read tuple references may be added to the queue during propagation of current state of a relation, e.g. to initialize the newly added output.
Furthermore, in these embodiments, the current state of the relation is maintained in store 280. Note that such state maintenance is only applicable to operators whose output is a relation. Also, a data structure (e.g. a bit map) is maintained to denote the newly coupled operators. Accordingly, execution of a new continuous query in such embodiments begins with each relation's current state being propagated to the newly coupled operators. In these embodiments, execution of the new continuous query on streams (in contrast to relations) does not use any current state (since there is none) and instead uses new tuples that are time stamped after the current time (at which time execution resumes).
In some embodiments, a multi-reader queue of the type described above enables propagation (by reading) of a relation's state selectively to only certain operators that are being used in a new continuous query which did not previously read this information. Such selectivity avoids propagation of past tuples multiple times, to operators of existing queries. More specifically, the queue of certain embodiments supports marking by each operator of tuples in a relational store as being available to be read only by individually identified outputs of the queue that have been newly added, for execution of the new continuous query.
The above-described queue may be implemented in any manner well known in the art, although certain embodiments of the invention use the following implementation. The queue does not itself contain any tuples; instead it contains references to a store (which may be a relational store or a window store) in which the tuples are stored. Each output (and hence reader) of the queue has a read pointer which is advanced when a tuple for that output is read from the store. The queue initially holds references to all tuples that are received, until a tuple is read by all readers of the queue, at which time that tuple's reference is automatically deleted from the queue. For example, if a 1st continuous query is received at time 100 and a 2nd continuous query is received at time 300, and if a tuple of a stream used by both queries came in at time 175 and its negative came in at time 275, then the 2nd query never sees this tuple, although references to the tuple and its negative are both seen by the 1st query. A negative of a tuple typically represents a request to delete information inserted by the tuple, which is an incremental change as discussed above.
Depending on the embodiment, even when a tuple's reference is deleted from a queue, that particular tuple itself may still exist in the underlying store, for example for use by another queue. The store is implemented in such embodiments with the semantics of a bag of tuples that are written by the queue. These tuples are read by multiple readers of a queue that have been added as subscribers to the store, and each reader may individually dequeue a given tuple's reference, from that reader's view of the queue, after reading the given tuple from the queue. In such embodiments, the queue has only one writer, to write each tuple just once into the store, on receipt of the tuple by extended DSMS 200 from an outside stream (e.g. from a user).
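The queue semantics described above can be sketched as follows (an illustrative simplification with invented names, not the patent's implementation): tuples live in a shared store, the queue holds only references, each reader advances its own read pointer, and a reference is garbage-collected only after every reader has consumed it.

```python
class MultiReaderQueue:
    """Sketch of a single-writer, multi-reader queue of tuple references
    into a shared store; each reader keeps its own read pointer."""

    def __init__(self, store):
        self.store = store   # shared dict: reference -> tuple
        self.refs = []       # references, in order of receipt
        self.readers = {}    # reader name -> index into self.refs

    def add_reader(self, name):
        # A newly added reader starts at the head of the surviving refs,
        # so it may see references already read by earlier readers.
        self.readers[name] = 0

    def enqueue(self, ref, tup):
        # Single writer: each tuple is written just once into the store.
        self.store[ref] = tup
        self.refs.append(ref)

    def read(self, name):
        i = self.readers[name]
        if i >= len(self.refs):
            return None  # nothing unread for this reader
        tup = self.store[self.refs[i]]
        self.readers[name] += 1
        self._collect()
        return tup

    def _collect(self):
        # Drop references that every current reader has passed.
        done = min(self.readers.values(), default=0)
        if done:
            self.refs = self.refs[done:]
            self.readers = {n: i - done for n, i in self.readers.items()}
```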
In several embodiments, a store is created for and owned by a physical operator (such as a range window operator on a stream) that is used in a continuous query (hereinafter “1st continuous query”). Hence, store is automatically shared when the same physical operator is also used in a 2nd continuous query which is added subsequent to start of execution of the 1st continuous query. In some embodiments, only operators that are sources of data for the 2nd continuous query (typically, but not necessarily, leaf node operators) are shared. In such embodiments, the only requirement to share operators is that they have an identical name (of a relation or stream) that is being sourced therefrom.
Depending on the embodiment, a physical operator for the 1st continuous query may read data from a relation's store or from a store of a window on a stream, using a queue which may be the same as or different from the queue used by the same physical operator when executed for the 2nd continuous query. In some embodiments, a single physical operator that is used in execution of different queries may itself use a single queue to support multiple readers, although in other embodiments different queues are used by the same physical operator in different queries.
For example, assume that a store (hereinafter “window store”) for a stream operator of an illustrative embodiment holds stream tuples A, B, C and D (also called messages A, B, C and D). If tuple A has been read by the 1st continuous query from the window store, then tuple A is dequeued from the 1st queue but the same tuple A remains in the window store until a later point in time when tuple A is dequeued by the 2nd queue. In this embodiment, tuple A is not deleted from the window store until tuple A has been read by all subscribers that read from the window store, at which time it is automatically deleted.
In the just-described example, after tuple A has been deleted from the window store, if a 3rd queue has a new reader that now subscribes to the window store, then the 3rd queue may once again insert the same tuple A into the window store, but at this stage the re-inserted tuple A is not available to the 1st queue and to the 2nd queue (both of which have already read tuple A). This is because messages being inserted for the 3rd queue are directed only to its reader (i.e. 3rd queue's reader), and not to the readers of the 1st queue and the 2nd queue.
Propagation to new outputs (see act 313 in
In some embodiments, only bottom-most operators in an execution tree are shared among queries as described herein, namely operators at level L=0, which directly receive tuples of event data in extended DSMS 200 from outside. Such operators do not have any other inputs, and hence they can be shared between different queries as long as the operators have the same name, e.g. if the operators represent the same relation. Alternative embodiments of the invention check if operators at higher levels can be shared. Specifically some alternative embodiments check if operators at level L>0, e.g. if a Join operator used for executing existing queries can be shared with a new continuous query. Such alternative embodiments may check if a subtree rooted at p can be implemented by a subgraph in the currently executing plan.
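The level-0 sharing rule described above reduces to a find-or-create lookup keyed by the name of the sourced stream or relation, as in the following sketch (function and parameter names are illustrative, not from the patent):

```python
def find_or_create_source(global_plan, name, make_operator):
    """Share a level-0 source operator between queries when it sources
    the same named stream or relation; otherwise create and register a
    new physical operator in the global plan."""
    if name in global_plan:
        return global_plan[name]     # reuse: same name, same source operator
    op = make_operator(name)         # e.g. build a new physical source operator
    global_plan[name] = op
    return op

plan = {}
a = find_or_create_source(plan, "R", lambda n: {"source": n})
b = find_or_create_source(plan, "R", lambda n: {"source": n})
assert a is b   # second query over "R" shares the existing operator
```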
During the propagation of the entire state of a relation in act 313, all tuples with a current time stamp are propagated, including both insert requests and delete requests, in embodiments that use this form of tuples as described above. Hence, it will be apparent to the skilled artisan from this disclosure that the extended DSMS 200 thereafter behaves as if the new continuous queries were always present (relative to the relation). Such behavior enables the extended DSMS 200 to execute the new continuous query in a manner consistent with its execution of one or more existing continuous queries. Hence, if a new continuous query happens to be identical to an existing continuous query, identical streams are thereafter produced as outputs thereof.
Next, a new tuple of the relation is propagated (as per act 314), to all outputs of the corresponding operator (i.e. to new outputs as well as pre-existing outputs of the relation operator). The new tuple of a relation may be generated in any manner, depending on the embodiment. For example, the new tuple may arise from changes to a relation that are identified by a user, via a communication link 242 into store 280 of extended DSMS 200 (
Depending on the embodiment, the extended DSMS 200 may perform act 313 at any time before act 314, after execution resumes with the modified executing plan. In some embodiments, act 313 is performed at whatever time the relation operator that is being shared (between one or more existing queries and one or more new continuous queries) is scheduled to be executed next. In several embodiments, extended DSMS 200 schedules operators on a round-robin basis, although other scheduling mechanisms may also be used in accordance with the invention, depending on the embodiment.
In certain alternative embodiments, act 313 (
Note that although a procedure for propagating previously-received information to an operator's newly added outputs has been described above in the context of sourcing tuples of a relation, the same procedure may also be used in some embodiments by an operator that sources tuples of a view relation operator (i.e. an operator that sources the information to implement a view on top of a relation). In this context, a view of extended DSMS 200 has the same semantics as a view in a prior art database management system (DBMS).
Operation of extended DSMS 200 of some embodiments is further described now, in the context of an illustrative example shown in
On receiving the above-described continuous query, extended DSMS 200 creates a query object illustrated in
Also at time 500.5, assume a new query Q2 is registered for execution in the extended DSMS 200, e.g. by the user typing in the following text in a command line interpreter:
Accordingly, relation operator R can be shared, in a modified plan for execution of both queries Q1 and Q2 as illustrated in
Subsequently, the relation operator R supplies any new tuple at time 501 (see
In some embodiments, a computer of extended DSMS 200 is programmed to perform the three methods illustrated in
In act 501, the level L is set to zero, after which time the computer enters a loop between act 502 (which initializes a current operator Oi to a source operator at level L) and act 507 (which increments the level unless root is reached in which case control transfers to act 508, indicative that the first pass has been completed). In the just-described loop of
After act 502 (
Next, in act 505, the computer saves a pointer to the physical operator that was created in act 504 or alternatively found to exist in act 503. After saving the pointer, the computer goes to act 506 to increment operator Oi to the next operator in current level L and transfer control to act 503, unless there are no more unvisited operators in level L in which case control transfers to act 507 (discussed in the previous paragraph), after which the first pass is completed.
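The first pass described above (acts 502-507) can be sketched as a level-by-level walk over the new query's logical plan that either shares an equivalent physical operator already present in the global plan or creates a new one, saving a pointer either way. This is an illustrative sketch only; the function and key names are assumptions, and real equivalence checking is far richer than the dictionary lookup used here:

```python
# Hedged sketch of the first pass (acts 502-507); all names are assumed.
def first_pass(levels, global_plan):
    """levels: lists of logical-operator keys, levels[0] being the source
    operators at level 0; global_plan: the existing physical operators,
    keyed by an (assumed) equivalence key."""
    pointers = {}
    for level_ops in levels:              # acts 502/507: ascend level by level
        for op in level_ops:              # acts 503-506: visit each operator
            if op in global_plan:
                phys = global_plan[op]    # act 503: equivalent exists; share it
            else:
                phys = {"key": op, "shared": False}   # act 504: create anew
                global_plan[op] = phys
            pointers[op] = phys           # act 505: save pointer to physical op
    return pointers
```

On completion, every logical operator of the new query holds a pointer to a physical operator, shared wherever the global plan already contained an equivalent one.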
Next, a second pass is begun by the computer as illustrated in
During operation 520 to instantiate an operator, the computer of some embodiments may be programmed to perform a number of acts 521-526 as discussed next, although in other embodiments this operation 520 may be performed by other acts that will be apparent to the skilled artisan in view of this disclosure. Specifically, in act 521, the computer creates an output queue, unless this operator Oi is an output operator. Next, in act 522 the computer adds operator Oi as a reader of the output queues of the operators that supply input to operator Oi. In act 522, memory is allocated in some embodiments, to hold one or more pointers that are used to implement the reader. Thereafter, in act 523, the computer checks if operator Oi's input evaluates to a stream. If the result is not a stream, then the computer gets the input operator of Oi and invokes a function to take note (by setting a flag, also called a propagation-needed flag) of the need to propagate the current state thereof, as per act 524, followed by transfer of control to act 525. If the result in act 523 is a stream, then act 524 is skipped and the computer directly transfers control to act 525. In some embodiments, whenever the execution operator is next invoked by the scheduler, its state is propagated if the just-described propagation-needed flag is set.
In act 525, the computer checks if operator Oi's input operator's store(s) can be shared by Oi. If the store(s) cannot be shared, then the computer allocates memory for a store to hold event data being output by operator Oi (as per act 526), followed by transferring control to act 527. Note that control also transfers to act 527 if the answer in act 525 is yes. In act 527, the computer saves a pointer to the output store in Oi, and then adds (as per act 528) operator Oi as a reader of the output stores of the input operators of Oi. In act 528, additional memory is allocated in some embodiments, to hold one or more pointers that are used to implement the reader. This completes operation 520. Thereafter, in act 517 of
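Operation 520 (acts 521-528) can likewise be sketched in Python. The representation below is an assumption made for illustration (operators as dictionaries, stores as lists); it shows the three essentials: registering the new operator as a reader of its inputs, flagging relation-valued inputs for state propagation, and sharing an input's store where possible instead of allocating a new one:

```python
# Hedged sketch of operation 520 (acts 521-528); names are illustrative.
def instantiate(op, inputs):
    op["queue"] = []                                # act 521: output queue
    for src in inputs:
        src.setdefault("readers", []).append(op)    # act 522: register reader
        if not src.get("is_stream", False):
            # Act 524: input evaluates to a relation, so mark the input
            # operator's need to propagate its current state when the
            # scheduler next invokes it.
            src["propagation_needed"] = True
    # Acts 525-527: share an input's store when permitted; otherwise
    # allocate a fresh store for this operator's output (act 526).
    sharable = next((s for s in inputs if s.get("store_sharable")), None)
    op["store"] = sharable["store"] if sharable else []
    return op
```

Sharing the store (rather than copying event data) is what lets a newly added query reuse the output of an existing operator without duplicating memory.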
Next, a third pass is begun by the computer as illustrated in
After act 532 (
In act 536 of
In act 533 (
Note that the extended data stream management system 200 may be implemented in some embodiments by use of a computer (e.g. an IBM PC) or workstation (e.g. Sun Ultra 20) that is programmed with an application server, of the type available from Oracle Corporation of Redwood Shores, Calif. Such a computer can be implemented by use of hardware that forms a computer system 600 as illustrated in
Computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying to a computer user, any information related to DSMS 200 such as a data stream 231 that is being output by computer system 600. An example of data stream 231 is a continuous display of stock quotes, e.g. in a horizontal stripe at the bottom of display 612. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
As described elsewhere herein, incrementing of multi-session counters, shared compilation for multiple sessions, and execution of compiled code from shared memory are performed by computer system 600 in response to processor 604 executing instructions programmed to perform the above-described acts and contained in main memory 606. Such instructions may be read into main memory 606 from another computer-readable medium, such as storage device 610. Execution of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement an embodiment of the type illustrated in
The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 604 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying the above-described instructions to processor 604 to implement an embodiment of the type illustrated in
Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. Local network 622 may interconnect multiple computers (as described above). For example, communication interface 618 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world wide packet data communication network 628 now commonly referred to as the “Internet”. Local network 622 and network 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are exemplary forms of carrier waves transporting the information.
Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 530 might transmit a code bundle through Internet 628, ISP 626, local network 622 and communication interface 618. In accordance with the invention, one such downloaded set of instructions implements an embodiment of the type illustrated in
Other than changes of the type described above, the data stream management system (DSMS) of several embodiments of the current invention operates in a manner similar or identical to Stanford University's DSMS. Hence, the relation operator in such a computer propagates any new tuples that have a new time stamp to all query operators coupled thereto, including the newly coupled query operator. In this manner, a computer that is programmed in accordance with the invention receives and executes new continuous queries while continuing to operate on existing continuous queries, without the prior art issues that otherwise arise from updating relation operators during modification of an executing plan.
In some embodiments, the DSMS uses a logical plan for each query, in addition to a global physical plan and a global execution plan for all the queries registered in the system. In these embodiments, in the global physical plan, the operators are linked with each other directly, whereas in the global execution plan, they are completely independent of each other, and are indirectly linked with each other via queues. Moreover, in several such embodiments, a global physical plan contains physical operators, whereas a global execution plan contains execution operators. As noted above, the physical operators of certain embodiments are directly linked with each other, whereas the execution operators are not. Physical operators of many embodiments contain the compile-time information, whereas the execution operators contain the run-time information and are scheduled by the scheduler. The compiler of certain embodiments uses the physical plan for all the optimizations (merging, sharing, the type of store to be used, etc.), and then the corresponding execution operators are created.
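The distinction between the two global plans can be made concrete with a small sketch (class names assumed, not the patent's): a physical operator holds direct references to its input operators, carrying compile-time structure, whereas an execution operator holds no reference to any other operator and communicates only through queues:

```python
# Illustrative contrast between physical and execution operators; assumed names.
class PhysicalOp:
    """Compile-time node: directly linked to its input operators."""
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)   # direct links, used by the optimizer

class ExecOp:
    """Run-time node: independent of other operators, linked only via queues."""
    def __init__(self, name, in_queues, out_queue):
        self.name = name
        self.in_queues = in_queues   # indirect links: shared queues
        self.out_queue = out_queue

    def run(self):
        # Drain each input queue and forward downstream; note the operator
        # never references another operator object, only queues.
        for q in self.in_queues:
            while q:
                self.out_queue.append(q.pop(0))
```

This decoupling is what allows a scheduler to invoke execution operators in any order (e.g. round-robin) while the compiler performs merging and sharing optimizations purely on the directly-linked physical plan.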
Numerous modifications and adaptations of the embodiments described herein will be apparent to the skilled artisan in view of this current disclosure. Accordingly numerous such modifications and adaptations are encompassed by the attached claims.
Following Subsections A and B are integral portions of the current patent application and are incorporated by reference herein in their entirety. Subsection A describes one illustrative embodiment in accordance with the invention. Subsection B describes pseudo-code that is implemented by the embodiment illustrated in Subsection A.
Subsection A (of Detailed Description)
A method performed in some embodiments is illustrated in the following pseudo-code.
1. Registering a Query Q with the System
a. This is done as per act 301 in
b. At this point, the query text is parsed, semantic analysis is done (and if there are no user errors in the query specification) the logical plan is computed, the physical plan is computed and the physical plan is also optimized. This is done as illustrated by acts 301A-518 (spanning
c. After completion of semantic analysis, the list of from-clause entities is visited to determine if this query has any direct dependencies on views. For each of the views that this query directly depends on, the query associated with the view is obtained and is stored in the Query object as the set of query dependencies. This is done as part of act 301A in
d. A Query object is created and it stores the root of the optimized physical plan for the query. Note that the root of this plan is not the Output operator.
e. As part of the physical plan computation, sharing of the common (with other queries) base tables and views is also achieved. View sharing involves “pointing” to the view root operator that is “above” the root operator for the query associated with the view. For base table and view sources that are referenced for the first time by this query (i.e. no other registered query in the system references these base tables/views), a Stream Source operator and a View Root operator are created and stored in a global array of source operators maintained in the DSMS. This is illustrated in act 504.
2. Destinations for the Query Q are Specified.
a. A physical layer Output operator is created. This results in the creation of the Output operator and its association with the Input/Output driver corresponding to the specified destination. The instance of the Output operator created is returned. See act 504
b. The returned Output operator is added to a list of outputs for the query and stored inside the Query object. See act 505
c. At this point, the query Q is checked if it has already been started
d. If no (as in this case), then nothing else needs to be done
3. The Query Q is Started for Execution
a. If the query has already been started, then do nothing and return
b. Else, execution operators are created recursively for the operators—see
c. The state of the query is set to STARTED, so that it doesn't get started again. Note that this state is checked in 2 (c) above.
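Steps 1-3 above can be sketched as a small query-lifecycle class. This is a hedged illustration only; the class, method, and state names are assumptions, and the parsing, planning, and execution-operator creation that the steps describe are represented by comments:

```python
# Hypothetical sketch of the registration lifecycle in steps 1-3; all names
# are illustrative, not the patent's actual API.
class Query:
    def __init__(self, text):
        # Step 1: parsing, semantic analysis, and logical/physical plan
        # computation and optimization would occur here.
        self.text = text
        self.outputs = []
        self.state = "REGISTERED"

    def add_destination(self, dest):
        # Step 2: an Output operator is created for the destination and
        # stored in the Query object's list of outputs.
        self.outputs.append(dest)

    def start(self):
        # Step 3a: if already started, do nothing and return.
        if self.state == "STARTED":
            return False
        # Step 3b: execution operators would be created recursively here.
        self.state = "STARTED"   # step 3c: prevents a second start
        return True
```

The STARTED flag is the piece that step 2(c)/3(c) turns on and checks, making `start()` idempotent so destinations added after the query is running are handled differently from those added before.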
Subsection B (of Detailed Description)
A method performed in some embodiments is illustrated in the following pseudo-code.
In one implementation, the internal representation of a relation is an incremental representation. When a new query Q is being admitted into an already running system (dynamic query addition), the following scenario may be encountered. There could be a newly created execution operator p (private to the current query Q), one of whose inputs is an operator c that is being shared and is already part of the running system when query Q is being admitted into the system.
If operator c evaluates to a relation, then the operator c first propagates its current relation state to the newly created operator p (which is coupled to an output of c) (via the queue connecting operators c and p), before sending any further data on the relation. This is because an incremental representation is used for relations and this implies that a starting snapshot (i.e. an initial state) is required on top of which subsequent incremental data have to be applied, to determine the state of a relation at any point in time (the state of the relation input from c, for the operator p).
Thus, to support dynamic query addition, several embodiments identify existing operators that need to propagate their relation's state, and also identify the newly created operators to which they should be propagating that state. Some embodiments identify existing operators that need to propagate their relation's current state, and also identify for each such existing operator the queue reader identities (“ids”) corresponding to the newly created operators to which state is to be propagated.
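The bookkeeping described above can be sketched as follows. This is an assumption-laden illustration (the function name, the dictionary representation of operators, and the use of integer reader ids are all invented for the example): for each shared operator whose output is a relation, it records the queue-reader ids belonging to the newly created operators, so that the starting snapshot is propagated only to those readers and not re-sent to pre-existing ones:

```python
# Illustrative sketch of identifying state-propagation targets; assumed names.
def mark_state_propagation(shared_ops, new_reader_ids):
    """shared_ops: dicts with 'name', 'is_relation', and 'readers' (queue
    reader ids); new_reader_ids: ids created for the query being added."""
    plan = {}
    for c in shared_ops:
        if not c["is_relation"]:
            continue          # stream outputs need no starting snapshot
        targets = [r for r in c["readers"] if r in new_reader_ids]
        if targets:
            # Operator c must propagate its current relation state to
            # exactly these newly created readers before further data.
            plan[c["name"]] = targets
    return plan
```

Restricting propagation to the new reader ids is what prevents pre-existing queries from receiving a duplicate snapshot of a relation they have already been consuming incrementally.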
The following describes a general approach used in various embodiments on top of any operator sharing algorithm (OSA for short) which has the following property, called the “OSA Subgraph Property”: if OSA determines that an operator corresponding to p can be shared from the existing global execution plan, then OSA determines that the subtree rooted at p can be implemented by a subgraph of the existing global execution plan.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4996687 *||Oct 11, 1988||Feb 26, 1991||Honeywell Inc.||Fault recovery mechanism, transparent to digital system function|
|US5495600||Jun 3, 1992||Feb 27, 1996||Xerox Corporation||Conversion of queries to monotonically increasing incremental form to continuously query a append only database|
|US5822750 *||Jun 30, 1997||Oct 13, 1998||International Business Machines Corporation||Optimization of correlated SQL queries in a relational database management system|
|US5826077||Apr 17, 1997||Oct 20, 1998||Texas Instruments Incorporated||Apparatus and method for adding an associative query capability to a programming language|
|US5857182||Jan 21, 1997||Jan 5, 1999||International Business Machines Corporation||Database management system, method and program for supporting the mutation of a composite object without read/write and write/write conflicts|
|US6263332||Aug 14, 1998||Jul 17, 2001||Vignette Corporation||System and method for query processing of structured documents|
|US6546381||Oct 4, 1999||Apr 8, 2003||International Business Machines Corporation||Query optimization system and method|
|US6836778||May 1, 2003||Dec 28, 2004||Oracle International Corporation||Techniques for changing XML content in a relational database|
|US6985904||Feb 28, 2002||Jan 10, 2006||Oracle International Corporation||Systems and methods for sharing of execution plans for similar database statements|
|US7310638 *||Oct 6, 2004||Dec 18, 2007||Metra Tech||Method and apparatus for efficiently processing queries in a streaming transaction processing system|
|US7383253||Dec 17, 2004||Jun 3, 2008||Coral 8, Inc.||Publish and subscribe capable continuous query processor for real-time data streams|
|US7403959||Aug 3, 2005||Jul 22, 2008||Hitachi, Ltd.||Query processing method for stream data processing systems|
|US7673065||Oct 20, 2007||Mar 2, 2010||Oracle International Corporation||Support for sharing computation between aggregations in a data stream management system|
|US20040064466||May 1, 2003||Apr 1, 2004||Oracle International Corporation||Techniques for rewriting XML queries directed to relational database constructs|
|US20040220912||May 1, 2003||Nov 4, 2004||Oracle International Corporation||Techniques for changing xml content in a relational database|
|US20040220927||May 1, 2003||Nov 4, 2004||Oracle International Corporation||Techniques for retaining hierarchical information in mapping between XML documents and relational data|
|US20040267760||Jun 23, 2003||Dec 30, 2004||Brundage Michael L.||Query intermediate language method and system|
|US20050055338||Sep 5, 2003||Mar 10, 2005||Oracle International Corporation||Method and mechanism for handling arbitrarily-sized XML in SQL operator tree|
|US20050065949||Nov 8, 2004||Mar 24, 2005||Warner James W.||Techniques for partial rewrite of XPath queries in a relational database|
|US20050229158||Sep 16, 2004||Oct 13, 2005||Ashish Thusoo||Efficient query processing of XML data using XML index|
|US20050289125||Sep 22, 2004||Dec 29, 2005||Oracle International Corporation||Efficient evaluation of queries using translation|
|US20060031204||Sep 22, 2004||Feb 9, 2006||Oracle International Corporation||Processing queries against one or more markup language sources|
|US20060100969||Nov 8, 2004||May 11, 2006||Min Wang||Learning-based method for estimating cost and statistics of complex operators in continuous queries|
|US20060230029||Apr 7, 2005||Oct 12, 2006||Weipeng Yan||Real-time, computer-generated modifications to an online advertising program|
|US20060235840||Sep 27, 2005||Oct 19, 2006||Anand Manikutty||Optimization of queries over XML views that are based on union all operators|
|US20070022092||Feb 23, 2006||Jan 25, 2007||Hitachi Ltd.||Stream data processing system and stream data processing method|
|US20070136254||Nov 8, 2006||Jun 14, 2007||Hyun-Hwa Choi||System and method for processing integrated queries against input data stream and data stored in database using trigger|
|US20070294217||Mar 27, 2007||Dec 20, 2007||Nec Laboratories America, Inc.||Safety guarantee of continuous join queries over punctuated data streams|
|US20080028095||Jul 27, 2006||Jan 31, 2008||International Business Machines Corporation||Maximization of sustained throughput of distributed continuous queries|
|US20080046401||Aug 14, 2007||Feb 21, 2008||Myung-Cheol Lee||System and method for processing continuous integrated queries on both data stream and stored data using user-defined share trigger|
|US20080114787||Jan 29, 2007||May 15, 2008||Hitachi, Ltd.||Index processing method and computer systems|
|US20080301124||Mar 6, 2008||Dec 4, 2008||Bea Systems, Inc.||Event processing query language including retain clause|
|US20090043729||Aug 9, 2007||Feb 12, 2009||Zhen Liu||Processing Overlapping Continuous Queries|
|US20090070786||Jun 4, 2008||Mar 12, 2009||Bea Systems, Inc.||Xml-based event processing networks for event server|
|US20090106189||Oct 17, 2007||Apr 23, 2009||Oracle International Corporation||Dynamically Sharing A Subtree Of Operators In A Data Stream Management System Operating On Existing Queries|
|US20090106190||Oct 18, 2007||Apr 23, 2009||Oracle International Corporation||Support For User Defined Functions In A Data Stream Management System|
|US20090106198||Oct 20, 2007||Apr 23, 2009||Oracle International Corporation||Support for sharing computation between aggregations in a data stream management system|
|US20090106214||Oct 17, 2007||Apr 23, 2009||Oracle International Corporation||Adding new continuous queries to a data stream management system operating on existing queries|
|US20090106215||Oct 18, 2007||Apr 23, 2009||Oracle International Corporation||Deleting a continuous query from a data stream management system continuing to operate on other queries|
|US20090106440||Oct 20, 2007||Apr 23, 2009||Oracle International Corporation||Support for incrementally processing user defined aggregations in a data stream management system|
|US20090248749||Jun 4, 2009||Oct 1, 2009||International Business Machines Corporation||System and Method for Scalable Processing of Multi-Way Data Stream Correlations|
|1||Advisory Action dated Aug. 18, 2009 in U.S. Appl. No. 11/601,415; 3 pages.|
|2||Amendment after Notice of Allowance dated Dec. 5, 2009 in U.S. Appl. No. 11/977,440.|
|3||Amendment after Notice of Allowance dated Feb. 24, 2010 in U.S. Appl. No. 11/874,850.|
|4||Amendment dated Apr. 8, 2010 in U.S. Appl. No. 11/874,896.|
|5||Amendment dated Feb. 16, 2010 in U.S. Appl. No. 11/873,407.|
|6||Amendment dated Feb. 20, 2011 in U.S. Appl. No. 11/977,439, 9 pages.|
|7||Amendment dated Feb. 22, 2011 in U.S. Appl. No. 11/874,896, 19 pages.|
|8||Amendment dated Jan. 13, 2010 in U.S. Appl. No. 11/977,437.|
|9||Amendment dated Jan. 20, 2009 in U.S. Appl. No. 11/601,415.|
|10||Amendment dated Jul. 13, 2010 in U.S. Appl. No. 11/977,439.|
|11||Amendment dated Jul. 27, 2009 in U.S. Appl. No. 11/601,415.|
|12||Amendment dated Mar. 10, 2010 in U.S. Appl. No. 11/874,197.|
|13||Amendment dated Mar. 29, 2010 in U.S. Appl. No. 11/601,415.|
|14||Amendment dated May 23, 2011 in U.S. Appl. No. 11/874,197, 15 pages.|
|15||Amendment dated Nov. 1, 2010 in U.S. Appl. No. 11/601,415; 12 pages.|
|16||Applicant's Interview Summary dated May 23, 2011 in U.S. Appl. No. 11/874,896, 2 pages.|
|17||Arasu A. "CQL: A Language for Continuous Queries over Streams and Relations", 2004, Lecture Notes in Computer Science, vol. 2921/2004, pp. 1-19.|
|18||Arasu, A. et al. "An Abstract Semantics and Concrete Language for Continuous Queries over Streams and Relations", 9th International Workshop on Database programming languages, Sep. 2003, pp. 12.|
|19||Arasu, A. et al. "Stream: The Stanford Data Stream Management System", Department of Computer Science, Stanford University, 2004, pp. 21.|
|20||Arasu, A. et al. "The CQL Continuous Query Language: Semantic Foundation and Query Execution", VLDB Journal, vol. 15, Issue 2, Jun. 2006, pp. 32.|
|21||Avnur, R. et al. "Eddies: Continuously Adaptive Query Processing", In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, TX, May 2000, pp. 12.|
|22||Avnur, R. et al. "Eddies: Continuously Adaptive Query Processing", slide show, believed to be prior to Oct. 17, 2007, pp. 4.|
|23||Babu, S. et al. "Continuous Queries over Data Streams", SIGMOD Record, Sep. 2001, pp. 12.|
|24||Bose, S. et al., "A Query Algebra for Fragmented XML Stream Data", 9th International Workshop on Data Base Programming Languages (DBPL), Sep. 2003, Postdam, Germany, http://lambda.uta.edu/dbpl03.pdf, pp. 11.|
|25||Buza, A. "Extension of CQL over Dynamic Databases", Journal of Universal Computer Science, vol. 12, No. 9, 2006, pp. 12.|
|26||Chandrasekaran, S. et al. "TelegraphCQ: Continuous Dataflow Processing for an Uncertain World", Proceedings of CIDR 2003, pp. 12.|
|27||Chen, J. et al. "NiagaraCQ: A Scalable Continuous Query System for Internet Databases", Proceedings of 2000 ACM SIGMOD, pp. 12.|
|28||Deshpande, A. et al. "Adaptive Query Processing", believed to be prior to Oct. 17, 2007, pp. 27.|
|29||Diao, Y. "Query Processing for Large-Scale XML Message Brokering", 2005, University of California Berkeley, pp. 226.|
|30||Diao, Y. et al. "Query Processing for High-Volume XML Message Brokering", Proceedings of the 29th VLDB Conference, Berlin, Germany, 2003, pp. 12.|
|31||Entire Prosecution History of U.S. Appl. No. 11/873,407, filed on Oct. 16, 2007 by Namit Jain et al.|
|32||Entire Prosecution History of U.S. Appl. No. 11/874,197 filed on Oct. 17, 2007 by Namit Jain et al.|
|33||Entire Prosecution History of U.S. Appl. No. 11/874,197, filed on Oct. 17, 2007 by Namit Jain et al.|
|34||Entire Prosecution History of U.S. Appl. No. 11/874,850, filed on Oct. 18, 2007 by Namit Jain et al.|
|35||Entire Prosecution History of U.S. Appl. No. 11/874,896, filed on Oct. 18, 2007 by Anand Srinivasan et al.|
|36||Entire Prosecution History of U.S. Appl. No. 11/977,437, filed on Oct. 20, 2007 by Anand Srinivasan et al.|
|37||Entire Prosecution History of U.S. Appl. No. 11/977,439, filed on Oct. 20, 2007 by Anand Srinivasan et al.|
|38||Entire Prosecution History of U.S. Appl. No. 11/977,440, filed on Oct. 20, 2007 by Anand Srinivasan et al.|
|39||Entire Prosecution History of U.S. Appl. No. 60/942,131, filed on Jun. 5, 2007 by Shailendra Mishra et al.|
|40||Examiner Interview Summary dated Aug. 30, 2010 in U.S. Appl. No. 11/873,407; 3 pages.|
|41||Examiner Interview Summary dated Dec. 1, 2009 in U.S. Appl. No. 11/977,440; 3 pages.|
|42||Examiner Interview Summary dated Nov. 10, 2010 in U.S. Appl. No. 11/873,407; 2 pages.|
|43||Examiner Interview Summary dated Nov. 16, 2010 in U.S. Appl. No. 11/874,197; 4 pages.|
|44||Examiner Interview Summary dated Nov. 18, 2009 in U.S. Appl. No. 11/874,850; 3 pages.|
|45||Examiner Interview Summary dated Oct. 12, 2010 in U.S. Appl. No. 11/601,415; 3 pages.|
|46||Examiner Interview Summary dated Oct. 25, 2010 in U.S. Appl. No. 11/874,896; 3 pages.|
|47||Examiner Interview Summary, dated Aug. 17, 2010 in U.S. Appl. No. 11/977,437; 3 pages.|
|48||Examiner's Interview Summary dated May 23, 2011 in U.S. Appl. No. 11/874,896, 12 pages.|
|49||Fernandez, Mary et al., "Build your own XQuery processor", http://edbtss04,dia.uniroma3.it/Simeon.pdf, pp. 116.|
|50||Fernandez, Mary et al., Implementing XQuery 1.0: The Galax Experience:, Proceedings of the 29th VLDB Conference, Berlin, Germany, 2003, pp. 4.|
|51||Final Office Action dated Apr. 26, 2010 in U.S. Appl. No. 11/873,407.|
|52||Final Office Action dated Apr. 8, 2010 in U.S. Appl. No. 11/977,437.|
|53||Final Office Action dated Jul. 23, 2010 in U.S. Appl. No. 11/874,896.|
|54||Final Office Action dated Jun. 29, 2010 in U.S. Appl. No. 11/874,197.|
|55||Final Office Action dated Jun. 30, 2010 in U.S. Appl. No. 11/601,415.|
|56||Final Office Action dated May 27, 2009 in U.S. Appl. No. 11/601,415.|
|57||Florescu, Daniela et al., "The BEA/XQRL Streaming XQuery Processor", Proceedings of the 29th VLDB Conference, 2003, Berlin, Germany, pp. 12.|
|58||Gilani, A. Design and implementation of stream operators, query instantiator and stream buffer manager, Dec. 2003, pp. 138.|
|59||Interview Summary dated Nov. 16, 2010 in U.S. Appl. No. 11/874,197.|
|60||Jin, C. et al. "ARGUS: Efficient Scalable Continuous Query Optimization for Large-Volume Data Streams", 10th International Database Engineering and Applications Symposium (IDEAS'06), 2006, pp. 7.|
|61||Madden, S. et al. "Continuously Adaptive Continuous Queries (CACQ) over Streams", SIGMOD, 2002, pp. 6.|
|62||Motwani, R. et al. "Models and Issues in Data Stream Systems", Proceedings of the 21st ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, 2002, pp. 26.|
|63||Munagala, K. et al. "Optimization of Continuous Queries with Shared Expensive Filters", Proceedings of the 26th ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, believed to be prior to Oct. 17, 2007, pp. 14.|
|64||Notice of Allowance dated Aug. 18, 2010 in U.S. Appl. No. 11/977,439.|
|65||Notice of Allowance dated Jun. 23, 2011 in U.S. Appl. No. 11/874,896, 33 pages.|
|66||Notice of Allowance dated Mar. 16, 2011 in U.S. Appl. No. 11/977,439, 10 pages.|
|67||Notice of Allowance dated Mar. 7, 2011 in U.S. Appl. No. 11/873,407, 8 pages.|
|68||Notice of Allowance dated Nov. 10, 2010 in U.S. Appl. No. 11/873,407.|
|69||Notice of Allowance dated Nov. 24, 2009 in U.S. Appl. No. 11/874,850.|
|70||Notice of Allowance dated Nov. 24, 2010 in U.S. Appl. No. 11/977,439.|
|71||Notice of Allowance dated Nov. 24, 2010 in U.S. Appl. No. 11/977,439; 8 pages.|
|72||Notice of Allowance dated Oct. 7, 2009 in U.S. Appl. No. 11/977,440.|
|73||Office Action dated Apr. 13, 2010 in U.S. Appl. No. 11/977,439.|
|74||Office Action dated Dec. 22, 2010 in U.S. Appl. No. 11/874,197, 22 pages.|
|75||Office Action dated Dec. 8, 2009 in U.S. Appl. No. 11/874,896.|
|76||Office Action dated Nov. 10, 2009 in U.S. Appl. No. 11/874,197.|
|77||Office Action dated Nov. 13, 2009 in U.S. Appl. No. 11/873,407.|
|78||Office Action dated Nov. 22, 2010 in U.S. Appl. No. 11/874,896.|
|79||Office Action dated Nov. 22, 2010 in U.S. Appl. No. 11/874,896; 25 pages.|
|80||Office Action dated Nov. 30, 2009 in U.S. Appl. No. 11/601,415.|
|81||Office Action dated Oct. 13, 2009 in U.S. Appl. No. 11/977,437.|
|82||Office Action dated Sep. 17, 2008 in U.S. Appl. No. 11/601,415.|
|83||Oracle Application Server 10 g Release 2 and 3, New Features Overview, An Oracle White Paper, Oct. 2005, pp. 48.|
|84||Oracle Database, SQL Language Reference, 11 g Release 1 (11.1), B28286-02, Sep. 2007, pp. 144.|
|85||Preliminary Amendment dated Oct. 14, 2009 in U.S. Appl. No. 11/874,197.|
|86||Preliminary Amendment dated Oct. 15, 2009 in U.S. Appl. No. 11/874,850.|
|87||Preliminary Amendment dated Oct. 15, 2009 in U.S. Appl. No. 11/977,439.|
|88||Preliminary Amendment dated Oct. 16, 2009 in U.S. Appl. No. 11/873,407, 5 pages.|
|89||Preliminary Amendment dated Oct. 16, 2009 in U.S. Appl. No. 11/874,896.|
|90||Preliminary Amendment dated Oct. 16, 2010 in U.S. Appl. No. 11/873,407.|
|91||Request for Continued Examination and Amendment dated Aug. 27, 2009 in U.S. Appl. No. 11/601,415.|
|92||Request for Continued Examination and Amendment dated Nov. 1, 2010 in U.S. Appl. No. 11/601,415.|
|93||Request for Continued Examination and Amendment dated Oct. 25, 2010 in U.S. Appl. No. 11/874,896.|
|94||Request for Continued Examination and Amendment dated Oct. 29, 2010 in U.S. Appl. No. 11/874,197.|
|95||Request for Continued Examination and Amendment dated Sep. 8, 2010 in U.S. Appl. No. 11/977,437.|
|96||Request for Continued Examination dated Aug. 26, 2010 in U.S. Appl. No. 11/873,407.|
|97||Response to Amendment dated Jan. 7, 2010 in U.S. Appl. No. 11/977,440.|
|98||Second Preliminary Amendment dated Oct. 14, 2009 in U.S. Appl. No. 11/874,197, 3 pages.|
|99||*||Sharaf et al. "Efficient Scheduling of Heterogeneous Continuous Queries", VLDB '06, Sep. 12-15, 2006, pp. 511-522.|
|100||*||Golab, Lukasz. "Sliding Window Query Processing over Data Streams", University of Waterloo, Waterloo, Ont., Canada, Aug. 2006.|
|101||Stream Query Repository: Online Auctions (CQL Queries), http://www-db.stanford.edu/stream/sqr/cql/onauc.html , Dec. 2, 2002, pp. 3.|
|102||Stream Query Repository: Online Auctions, http://www-db.stanford.edu/stream/sqr/onauc.html#queryspecsend , Dec. 2, 2002, pp. 2.|
|103||Supplemental Notice of Allowance dated Dec. 11, 2009 in U.S. Appl. No. 11/874,850.|
|104||Supplemental Notice of Allowance dated Jan. 27, 2010 in U.S. Appl. No. 11/874,850.|
|105||Terminal Disclaimer dated Jul. 13, 2010 filed in U.S. Appl. No. 11/977,439 over U.S. Appl. No. 11/874,896; 2 pages.|
|106||Terminal Disclaimer dated Jul. 13, 2010 filed in U.S. Appl. No. 11/977,439 over U.S. Appl. No. 11/977,437; 2 pages.|
|107||Terminal Disclaimer dated Jul. 13, 2010 filed in U.S. Appl. No. 11/977,439 over US Patent 7,673,065; 2 pages.|
|108||Terminal Disclaimer dated May 23, 2011 in U.S. Appl. No. 11/874,896 over U.S. Appl. No. 11/874,197, 3 pages.|
|109||Terminal Disclaimer dated May 23, 2011 in U.S. Appl. No. 11/874,896 over U.S. Appl. No. 11/874,202, 3 pages.|
|110||Terminal Disclaimer dated May 23, 2011 in U.S. Appl. No. 11/874,896 over U.S. Appl. No. 11/977,437, 3 pages.|
|111||Terminal Disclaimer dated May 23, 2011 in U.S. Appl. No. 11/874,896 over U.S. Appl. No. 11/977,439, 3 pages.|
|112||Terminal Disclaimer dated May 23, 2011 in U.S. Appl. No. 11/874,896 over U.S. Patent 7,673,065, 3 pages.|
|113||Terminal Disclaimer Review Decision dated Jun. 2, 2011 in U.S. Appl. No. 11/874,896, 2 pages.|
|114||Terry, D.B. et al. "Continuous queries over append-only databases", Proceedings of 1992 ACM SIGMOD, pp. 321-330.|
|115||Widom, J. et al. "CQL: A Language for Continuous Queries over Streams and Relations", believed to be prior to Oct. 17, 2007, pp. 31.|
|116||Widom, J. et al. "The Stanford Data Stream Management System", believed to be prior to Oct. 17, 2007, pp. 24.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8145859||Mar 2, 2009||Mar 27, 2012||Oracle International Corporation||Method and system for spilling from a queue to a persistent store|
|US8296331 *||Jan 26, 2010||Oct 23, 2012||Microsoft Corporation||Implementation of stream algebra over class instances|
|US8321450||Jul 21, 2009||Nov 27, 2012||Oracle International Corporation||Standardized database connectivity support for an event processing server in an embedded context|
|US8386466||Aug 3, 2009||Feb 26, 2013||Oracle International Corporation||Log visualization tool for a data stream processing server|
|US8387076||Jul 21, 2009||Feb 26, 2013||Oracle International Corporation||Standardized database connectivity support for an event processing server|
|US8402015 *||Aug 18, 2009||Mar 19, 2013||Hitachi, Ltd.||Method for processing stream data and system thereof|
|US8447744||Nov 30, 2010||May 21, 2013||Oracle International Corporation||Extensibility platform using data cartridges|
|US8498956||Aug 26, 2009||Jul 30, 2013||Oracle International Corporation||Techniques for matching a certain class of regular expression-based patterns in data streams|
|US8527458 *||Aug 3, 2009||Sep 3, 2013||Oracle International Corporation||Logging framework for a data stream processing server|
|US8589436||Aug 26, 2009||Nov 19, 2013||Oracle International Corporation||Techniques for performing regular expression-based pattern matching in data streams|
|US8676841||Aug 26, 2009||Mar 18, 2014||Oracle International Corporation||Detection of recurring non-occurrences of events using pattern matching|
|US8713049||Jul 28, 2011||Apr 29, 2014||Oracle International Corporation||Support for a parameterized query/view in complex event processing|
|US8788481||Feb 27, 2013||Jul 22, 2014||Hitachi, Ltd.||Method for processing stream data and system thereof|
|US8797178 *||Mar 10, 2008||Aug 5, 2014||Microsoft Corporation||Efficient stream sharing for multi-user sensor data collection|
|US8959106||Apr 19, 2011||Feb 17, 2015||Oracle International Corporation||Class loading using java data cartridges|
|US8990416||May 6, 2011||Mar 24, 2015||Oracle International Corporation||Support for a new insert stream (ISTREAM) operation in complex event processing (CEP)|
|US9020969 *||Jul 13, 2011||Apr 28, 2015||Sap Se||Tracking queries and retrieved results|
|US9047249||Feb 19, 2013||Jun 2, 2015||Oracle International Corporation||Handling faults in a continuous event processing (CEP) system|
|US9058360||Nov 30, 2010||Jun 16, 2015||Oracle International Corporation||Extensible language framework using data cartridges|
|US9098587||Mar 15, 2013||Aug 4, 2015||Oracle International Corporation||Variable duration non-event pattern matching|
|US9110945||Nov 12, 2013||Aug 18, 2015||Oracle International Corporation||Support for a parameterized query/view in complex event processing|
|US20090224941 *||Mar 10, 2008||Sep 10, 2009||Microsoft Corporation||Efficient stream sharing for multi-user sensor data collection|
|US20100030896 *||Feb 4, 2010||Microsoft Corporation||Estimating latencies for query optimization in distributed stream processing|
|US20100106946 *||Aug 18, 2009||Apr 29, 2010||Hitachi, Ltd.||Method for processing stream data and system thereof|
|US20100131543 *||Jan 26, 2010||May 27, 2010||Microsoft Corporation||Implementation of stream algebra over class instances|
|US20110029484 *||Aug 3, 2009||Feb 3, 2011||Oracle International Corporation||Logging framework for a data stream processing server|
|U.S. Classification||707/718, 707/713|
|Feb 10, 2008||AS||Assignment|
Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAIN, NAMIT;SRINIVASAN, ANAND;MISHRA, SHAILENDRA KUMAR;REEL/FRAME:020486/0025
Effective date: 20071119
|Jan 21, 2015||FPAY||Fee payment|
Year of fee payment: 4