Publication number | US7574333 B2 |

Publication type | Grant |

Application number | US 10/772,971 |

Publication date | Aug 11, 2009 |

Filing date | Feb 5, 2004 |

Priority date | Feb 5, 2004 |

Fee status | Paid |

Also published as | DE602005023270D1, EP1711867A1, EP1711867B1, US20050177348, WO2005078539A1 |


Inventors | Joseph Z. Lu |

Original Assignee | Honeywell International Inc. |




US 7574333 B2

Abstract

A projection is associated with a first signal and a second signal. The second signal includes a first portion associated with the first signal and a second portion not associated with the first signal. The projection at least substantially separates the first portion of the second signal from the second portion of the second signal. One or more parameters of a model are identified using at least a portion of the projection. The model associates the first signal and the first portion of the second signal.

Claims (23)

1. A method, comprising:

electronically receiving a projection associated with a first signal and a second signal, the first and second signals associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion not associated with the first signal, the projection comprising an upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal;

electronically identifying model parameters using at least a portion of the projection; and

electronically generating and storing a model associated with the model parameters, the model associating the first signal and the first portion of the second signal;

wherein identifying the model parameters comprises:

identifying one or more pole candidates using one or more first defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more pole candidates; and

identifying one or more model candidates using one or more second defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more model candidates; and

wherein each of the one or more second defined areas represents a backward column Hankel matrix centered along one of multiple diagonals of the upper triangular matrix, and wherein identifying the one or more model candidates comprises rewriting each backward column Hankel matrix as a forward column Hankel matrix.

2. The method of claim 1, wherein identifying the model parameters further comprises:

selecting at least one of the one or more pole candidates and selecting at least one of the one or more model candidates as the model parameters.

3. The method of claim 1, wherein:

the upper triangular matrix has a plurality of values along one of the diagonals of the upper triangular matrix, each value being greater than or equal to zero.

4. The method of claim 1, wherein:

the diagonals divide the upper triangular matrix into upper, lower, left, and right sections; and

the one or more first defined areas in the upper triangular matrix are located in the right section of the upper triangular matrix.

5. The method of claim 1, wherein:

identifying the model parameters comprises identifying one or more model parameters for each of multiple first defined areas in the upper triangular matrix.

6. The method of claim 5, wherein:

the one or more model parameters associated with different first defined areas in the upper triangular matrix are different; and

identifying the model parameters further comprises selecting the one or more model parameters associated with a specific one of the first defined areas in the upper triangular matrix.

7. The method of claim 1, wherein the projection at least partially isolates the first portion of the second signal from the second portion of the second signal in an orthogonal space.

8. The method of claim 1, wherein:

a first of the diagonals extends from an upper left corner to a lower right corner of the upper triangular matrix; and

a second of the diagonals extends from a lower left corner to an upper right corner of the upper triangular matrix.

9. The method of claim 1, further comprising:

controlling at least a portion of a process using the model.

10. A method, comprising:

electronically receiving a projection associated with a first signal and a second signal, the first and second signals associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion not associated with the first signal, the projection comprising a first upper triangular matrix, having two diagonals that divide the upper triangular matrix into four sections, a first of the diagonals starting at an upper left corner of the upper triangular matrix and traveling down and right in the upper triangular matrix, a second of the diagonals starting at a lower left corner of the upper triangular matrix and traveling up and right in the upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal;

electronically identifying one or more model parameters using at least a portion of the projection; and

electronically generating and storing a model associated with the one or more model parameters, the model associating the first signal and the first portion of the second signal;

wherein identifying the one or more model parameters comprises:

identifying one or more model parameters for each of multiple defined areas in the first upper triangular matrix, the defined areas located in a single one of the sections of the upper triangular matrix;

selecting the one or more model parameters associated with a specific one of the defined areas in the first upper triangular matrix; and

wherein selecting the one or more model parameters associated with the specific one of the defined areas in the first upper triangular matrix comprises:

for each defined area in the first upper triangular matrix, generating a matrix comprising a forward column Hankel matrix based on a prediction error, the prediction error associated with the one or more model parameters that are associated with that defined area;

for each generated matrix, performing canonical QR-decomposition on the matrix to form a second upper triangular matrix, each second upper triangular matrix having an upper right portion denoted R_{E3};

for each second upper triangular matrix, identifying a value for ∥R_{E3}∥_2^2; and

selecting the one or more model parameters associated with the defined area having the second upper triangular matrix with a smallest value for ∥R_{E3}∥_2^2.

11. An apparatus, comprising:

at least one input receiving a first signal and a second signal associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion not associated with the first signal; and

at least one processor:

generating a projection associated with the first and second signals and identifying model parameters using at least a portion of the projection, the projection comprising an upper triangular matrix having two diagonals that divide the upper triangular matrix into four sections, a first of the diagonals starting at an upper left corner of the upper triangular matrix and traveling down and right in the upper triangular matrix, a second of the diagonals starting at a lower left corner of the upper triangular matrix and traveling up and right in the upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal; and

generating and storing a model associated with the model parameters, the model associating the first signal and the first portion of the second signal;

wherein the at least one processor identifies the model parameters by:

identifying one or more pole candidates using one or more first defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more pole candidates, the one or more first defined areas located in a single one of the sections of the upper triangular matrix; and

identifying one or more model candidates using one or more second defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more model candidates; and

wherein each of the one or more second defined areas represents a matrix centered along one of the diagonals of the upper triangular matrix.

12. The apparatus of claim 11, wherein the at least one processor identifies the model parameters by:

selecting at least one of the one or more pole candidates and selecting at least one of the one or more model candidates as the model parameters.

13. The apparatus of claim 11, wherein:

the upper triangular matrix has a plurality of values along one of the diagonals of the upper triangular matrix, each value being greater than or equal to zero.

14. The apparatus of claim 11, wherein:

the at least one processor identifies the model parameters by identifying one or more model parameters for each of multiple first defined areas in the upper triangular matrix.

15. The apparatus of claim 11, wherein the at least one processor uses the model parameters associated with the stored model to de-noise the second signal.

16. The apparatus of claim 11, wherein:

each matrix centered along one of the diagonals of the upper triangular matrix comprises a backward column Hankel matrix; and

the at least one processor identifies the one or more model candidates by rewriting each backward column Hankel matrix as a forward column Hankel matrix.

17. An apparatus, comprising:

at least one input receiving a first signal and a second signal associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion not associated with the first signal; and

at least one processor:

generating a projection associated with the first and second signals and identifying one or more model parameters using at least a portion of the projection, the projection comprising a first upper triangular matrix, having two diagonals that divide the upper triangular matrix into four sections, a first of the diagonals starting at an upper left corner of the upper triangular matrix and traveling down and right in the upper triangular matrix, a second of the diagonals starting at a lower left corner of the upper triangular matrix and traveling up and right in the upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal; and

generating and storing a model associated with the one or more model parameters, the model associating the first signal and the first portion of the second signal;

wherein the at least one processor identifies the one or more model parameters by:

identifying one or more model parameters for each of multiple defined areas in the first upper triangular matrix, the defined areas located in a single one of the sections of the upper triangular matrix; and

selecting the one or more model parameters associated with a specific one of the defined areas in the first upper triangular matrix; and

wherein the at least one processor selects the one or more model parameters associated with the specific one of the defined areas in the first upper triangular matrix by:

for each defined area in the first upper triangular matrix, generating a matrix comprising a forward column Hankel matrix based on a prediction error, the prediction error associated with the one or more model parameters that are associated with that defined area;

for each generated matrix, performing canonical QR-decomposition on the matrix to form a second upper triangular matrix, each second upper triangular matrix having an upper right portion denoted R_{E3};

for each second upper triangular matrix, identifying a value for ∥R_{E3}∥_2^2; and

selecting the one or more model parameters associated with the defined area having the second upper triangular matrix with a smallest value for ∥R_{E3}∥_2^2.

18. A computer readable medium embodying a computer program, the computer program comprising:

computer readable program code that receives a projection associated with a first signal and a second signal, the first and second signals associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion associated with at least one disturbance, the projection comprising an upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal;

computer readable program code that identifies model parameters using at least a portion of the projection; and

computer readable program code that generates and stores a model associated with the model parameters, the model associating the first signal and the first portion of the second signal;

wherein the computer readable program code that identifies the model parameters comprises:

computer readable program code that identifies one or more pole candidates using one or more first defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more pole candidates; and

computer readable program code that identifies one or more model candidates using one or more second defined areas in the upper triangular matrix, the model parameters comprising at least one of the one or more model candidates; and

wherein each of the one or more second defined areas represents a backward column Hankel matrix centered along one of multiple diagonals of the upper triangular matrix, and wherein the computer readable program code that identifies the one or more model candidates comprises computer readable program code that rewrites each backward column Hankel matrix as a forward column Hankel matrix.

19. The computer readable medium of claim 18, wherein the computer readable program code that identifies the model parameters comprises:

computer readable program code that selects at least one of the one or more pole candidates and that selects at least one of the one or more model candidates as the model parameters.

20. The computer readable medium of claim 18, wherein:

the upper triangular matrix has a plurality of values along one of the diagonals of the upper triangular matrix, each value being greater than or equal to zero.

21. The computer readable medium of claim 18, wherein:

the computer readable program code that identifies the one or more model parameters comprises computer readable program code that identifies one or more model parameters for each of multiple first defined areas in the upper triangular matrix.

22. The computer readable medium of claim 21, wherein:

the one or more model parameters associated with different first defined areas in the upper triangular matrix are different; and

the computer readable program code that identifies the model parameters further comprises computer readable program code that selects the one or more model parameters associated with a specific one of the first defined areas in the upper triangular matrix.

23. A computer readable medium embodying a computer program, the computer program comprising:

computer readable program code that receives a projection associated with a first signal and a second signal, the first and second signals associated with a control system, the second signal comprising a first portion associated with the first signal and a second portion associated with at least one disturbance, the projection comprising a first upper triangular matrix having two diagonals that divide the upper triangular matrix into four sections, a first of the diagonals starting at an upper left corner of the upper triangular matrix and traveling down and right in the upper triangular matrix, a second of the diagonals starting at a lower left corner of the upper triangular matrix and traveling up and right in the upper triangular matrix, the projection at least partially isolating the first portion of the second signal from the second portion of the second signal;

computer readable program code that identifies one or more model parameters using at least a portion of the projection; and

computer readable program code that generates and stores a model associated with the one or more model parameters, the model associating the first signal and the first portion of the second signal;

wherein the computer readable program code that identifies the one or more model parameters comprises:

computer readable program code that identifies one or more model parameters for each of multiple defined areas in the first upper triangular matrix, the defined areas located in a single one of the sections of the upper triangular matrix; and

computer readable program code that selects the one or more model parameters associated with a specific one of the defined areas in the first upper triangular matrix; and

wherein the computer readable program code that selects the one or more model parameters associated with the specific one of the defined areas comprises:

computer readable program code that, for each defined area in the first upper triangular matrix, generates a matrix comprising a forward column Hankel matrix based on a prediction error, the prediction error associated with the one or more model parameters that are associated with that defined area;

computer readable program code that, for each generated matrix, performs canonical QR-decomposition on the matrix to form a second upper triangular matrix, each second upper triangular matrix having an upper right portion denoted R_{E3};

computer readable program code that, for each second upper triangular matrix, identifies a value for ∥R_{E3}∥_2^2; and

computer readable program code that selects the one or more model parameters associated with the defined area having the second upper triangular matrix with a smallest value for ∥R_{E3}∥_2^2.

Description

This patent application is related to U.S. patent application Ser. No. 10/773,017 entitled “APPARATUS AND METHOD FOR ISOLATING NOISE EFFECTS IN A SIGNAL” filed on Feb. 5, 2004, which is incorporated by reference.

This disclosure relates generally to model identification systems and more specifically to an apparatus and method for modeling relationships between signals.

Process control systems are often used to control the operation of a system. For example, a process control system may be used to control the operation of a processing facility. As a particular example, a process control system could manage the use of valves in a processing facility, where the valves control the flow of materials in the facility. Example processing facilities include manufacturing plants, chemical plants, crude oil refineries, and ore processing plants.

Conventional process control systems often use models to predict the behavior of a system being monitored. However, it is often difficult to identify the models used by the process control systems. For example, the conventional process control systems often process signals that suffer from noise or other disturbances. The presence of noise in the signals often makes it difficult for a process control system to identify a relationship between two or more signals. As a result, this often makes it more difficult to monitor and control a system.

This disclosure provides an apparatus and method for modeling relationships between signals.

In one aspect, a method includes receiving a projection associated with a first signal and a second signal. The second signal includes a first portion associated with the first signal and a second portion not associated with the first signal. The projection at least substantially separates the first portion of the second signal from the second portion of the second signal. The method also includes identifying one or more parameters of a model using at least a portion of the projection. The model associates the first signal and the first portion of the second signal.

In another aspect, an apparatus includes at least one input operable to receive a first signal and a second signal. The second signal includes a first portion associated with the first signal and a second portion not associated with the first signal. The apparatus also includes at least one processor operable to generate a projection associated with the first and second signals and to identify one or more parameters of a model associating the first signal and the first portion of the second signal. The projection at least substantially separates the first portion of the second signal from the second portion of the second signal.

In yet another aspect, a computer program is embodied on a computer readable medium and is operable to be executed by a processor. The computer program includes computer readable program code for generating a projection associated with a first signal and a second signal. The second signal includes a first portion associated with the first signal and a second portion associated with at least one disturbance. The projection at least substantially separates the first portion of the second signal from the second portion of the second signal. The computer program also includes computer readable program code for identifying one or more parameters of a model associating the first signal and the first portion of the second signal using at least a portion of the projection.

Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.

For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:

The figure illustrates an example system **100** for isolating noise effects in a signal according to one embodiment of this disclosure. The system **100** shown here is for illustration only; other embodiments of the system **100** may be used without departing from the scope of this disclosure.

In this example embodiment, the system **100** includes a monitored system **102**. The monitored system **102** represents any suitable system for producing or otherwise receiving an input signal **104** and producing or otherwise providing an ideal output signal **106**. In some embodiments, the monitored system **102** is represented by a process model G(s), which represents the transformation of the input signal **104** into the output signal **106**.

The monitored system **102** may represent any type of system. The monitored system **102** could, for example, represent a manufacturing or other processing system or a communication system. As a particular example, the monitored system **102** could represent a manufacturing plant having various valves that are controlled based on the input signal **104** and/or the ideal output signal **106**. The monitored system **102** could also represent a communication system where the input signal **104** represents a signal transmitted by a mobile telephone and the ideal output signal **106** represents the ideal signal to be received by a base station.

As shown in the figure, the ideal output signal **106** is often corrupted by some type of noise or other disturbance **108**. This leads to the creation of an actual output signal **110**. The actual output signal **110** includes a first portion associated with the input signal **104** and a second portion associated with the noise **108**. These two portions often overlap, making it difficult to separate them. The noise or other disturbance **108** could represent any suitable disturbance to the ideal output signal **106**, such as white noise or colored noise. As a particular example, the monitored system **102** could represent a production system, and the noise **108** could represent white noise introduced into an ideal output signal **106** before the signal **106** is received by a valve controller.

In this example, the system **100** includes a controller **112**, which has access to the input signal **104** and the actual output signal **110**. The controller **112** uses the input signal **104** and the actual output signal **110** to control the operation of the monitored system **102**. For example, the controller **112** could represent a valve controller capable of controlling the opening and closing of valves in the monitored system **102**. As another example, the controller **112** could represent a signal controller capable of analyzing the input signal **104** and the actual output signal **110** and adjusting one or more parameters used to transmit data in the system **100**. The controller **112** includes any hardware, software, firmware, or combination thereof for controlling one or more aspects of operation of the system **100**. As a particular example, the controller **112** could include one or more processors **114** and one or more memories **116** capable of storing data and instructions used by the processors. In this example, the controller **112** receives the input signal **104** through a first input **118** and the actual output signal **110** through a second input **120**.

As described above, the controller **112** only has access to an output signal **110** that has been altered because of noise or other disturbances **108**. Conventional systems attempt to remove noise or other disturbances **108** from a signal **110** using low-pass filtering. Low-pass filters often cannot eliminate much of the noise or other disturbances **108** from a signal **110** without impeding the performance of the system **100**.

To facilitate more accurate control over the monitored system **102**, the controller **112** generates at least one matrix associated with the input signal **104** and the actual output signal **110**. The controller **112** then generates a projection of the matrix using “canonical QR-decomposition.” This projects the matrix into orthogonal space, where the projection at least partially separates the input signal **104**, the portion of the actual output signal **110** corresponding to the input signal **104**, and the portion of the actual output signal **110** corresponding to the noise or other disturbances **108**. In this way, the controller **112** at least partially separates the effects of the input signal **104** in the output signal **110** from the effects of the noise **108** in the output signal **110**. As a result, the controller **112** is able to more effectively isolate the effects of noise **108** in the actual output signal **110**.

QR-decomposition refers to a matrix decomposition performed according to the following equation:

A = QR

where A represents a matrix being decomposed, Q represents an orthogonal matrix, and R represents an upper triangular matrix.

A problem with conventional QR-decomposition is that a given matrix A could be decomposed in different ways. For example, a given matrix A could be decomposed into [Q_1 R_1], [Q_2 R_2], or [Q_3 R_3]. This creates problems in isolating noise **108** in the actual output signal **110** because it means that the same matrix representing the same input signals **104** and actual output signals **110** could have different QR-decompositions.
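This non-uniqueness is easy to demonstrate numerically. The sketch below (a NumPy illustration, not code from the patent; the matrix values are arbitrary) flips the sign of one column of Q and the matching row of R, producing a second valid QR factorization of the same matrix:

```python
import numpy as np

# Any full-rank matrix illustrates the point; this one is arbitrary.
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])

Q, R = np.linalg.qr(A)      # one valid QR factorization

# Flip the sign of one column of Q and the matching row of R.
D = np.diag([1.0, -1.0])
Q2, R2 = Q @ D, D @ R       # Q2 still orthogonal, R2 still upper triangular

# Both factorizations reproduce A, so plain QR is not unique.
assert np.allclose(Q @ R, A) and np.allclose(Q2 @ R2, A)
```

Fixing the signs of the diagonal of R, as described next, removes this ambiguity.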

Canonical QR-decomposition or “CQR decomposition” represents a unique QR-decomposition where the diagonal values in the triangular matrix R are greater than or equal to zero. The “diagonal values” in the matrix R represent the values along the diagonal between the upper left corner and the lower right corner of the matrix R. By preventing the diagonal values in the upper triangular matrix R from being less than zero, each matrix A can be uniquely decomposed. This helps to facilitate the separation of noise effects contained in the actual output signal **110**. In some embodiments, software routines are used to decompose a matrix using canonical QR-decomposition. Example software to decompose a matrix using canonical QR-decomposition is shown in the Software Appendix.
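The Software Appendix is not reproduced here, but the idea can be sketched as follows (a minimal NumPy illustration, not the patent's actual routine; the function name is a choice of this example): perform an ordinary QR-decomposition, then flip signs so that every diagonal entry of R is nonnegative:

```python
import numpy as np

def canonical_qr(A):
    """Canonical QR: adjust signs so every diagonal entry of R is >= 0,
    which makes the factorization unique for a full-rank matrix."""
    Q, R = np.linalg.qr(A)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0          # leave zero diagonal entries alone
    # Flipping column j of Q and row j of R preserves the product Q @ R.
    return Q * signs, signs[:, None] * R

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Q, R = canonical_qr(A)
assert np.all(np.diag(R) >= 0) and np.allclose(Q @ R, A)
```

Because the sign correction multiplies a column of Q and the matching row of R by the same factor, the product Q R is unchanged while the diagonal of R becomes nonnegative.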

Although this figure illustrates one example of a system **100** for isolating noise effects in a signal, various changes may be made. For example, the functionality of the controller **112** could be implemented in any hardware, software, firmware, or combination thereof. Also, the functionality of the controller **112** could be used in any other apparatus, system, or environment. As particular examples, the functionality of the controller **112** could also be implemented in a monitor, modeling tool, evaluator, detector, adapter, or any other device or system.

The following figures illustrate example signals in the system **100** described above.

One figure illustrates an example input signal **104** received by the monitored system **102**. As shown in that figure, the input signal **104** may vary widely over a small number of samples and over a longer period of time.

Another figure illustrates an example ideal output signal **106** produced or otherwise provided by the monitored system **102**. As shown in that figure, the ideal output signal **106** varies, but not as rapidly or widely as the input signal **104**. Also, the ideal output signal **106** does not appear to include random peaks or valleys, which often indicate the presence of noise.

A further figure illustrates an example actual output signal **110** produced or otherwise provided by the monitored system **102**. As shown in that figure, the actual output signal **110** includes random peaks and valleys, indicating that the actual output signal **110** has been corrupted by noise or other disturbances **108**.

The controller **112** or other monitor in the system **100** has access to the input signal **104** and the actual output signal **110**. The controller **112** or other monitor generally lacks access to the ideal output signal **106**. As the example signals show, it is difficult to recover the ideal output signal **106** from the actual output signal **110**. For example, running the actual output signal **110** through a low-pass filter could remove much, but not all, of the noise and also remove some of the ideal output signal **106**.

As described above, the controller **112** separates the effects of noise **108** from the effects of the input signal **104** in the output signal **110**. In particular, the controller **112** generates a matrix and performs canonical QR-decomposition to project the matrix into orthogonal space, where the input signal **104**, the portion of the actual output signal **110** corresponding to the input signal **104**, and the portion of the actual output signal **110** corresponding to the noise **108** are at least partially separated. In this way, the controller **112** or other monitor can at least partially separate the noise effects from the input effects in the actual output signal **110**.

Although these figures illustrate example signals in the system **100**, they are for illustration only; other signals could occur in the system **100**.

The following figures illustrate example matrices used to isolate noise effects in a signal in the system **100**.

A matrix **300** is formed from samples **302** of the actual output signal **110**. As shown in the figure, each row of the matrix **300** includes k samples **302** of the actual output signal **110**, and each column of the matrix **300** includes n−(k+1) samples **302** of the actual output signal **110**. In particular embodiments, the number of rows in the matrix **300** is much greater than the number of columns in the matrix **300**, although any suitable number of rows and/or columns may be used.

At least some of the samples **302** of the actual output signal **110** appear multiple times in the matrix **300**. For example, the sample **302** labeled “y_{2}” appears twice in a diagonal pattern, and the sample **302** labeled “y_{3}” appears three times in a diagonal pattern. Overall, the matrix **300** includes n different samples **302** of the actual output signal **110**.

In this example, the matrix **300** represents a “column Hankel matrix.” This type of matrix includes a time series of samples **302** in the horizontal direction **304** (left to right) and a time series of samples **302** in the vertical direction **306** (top to bottom). Because the samples **302** in the horizontal direction **304** form a time series in the left-to-right direction, the matrix **300** represents a “forward” column Hankel matrix.
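A forward column Hankel matrix of this kind can be built directly. The sketch below is an illustrative NumPy construction (the function name and exact row count are choices of this example, not taken from the patent); note how each sample repeats along an anti-diagonal, as described above for y_2 and y_3:

```python
import numpy as np

def forward_hankel(y, k):
    """Forward column Hankel matrix: row i holds y[i], ..., y[i+k-1],
    so time runs left to right and each anti-diagonal repeats one sample."""
    y = np.asarray(y, dtype=float)
    return np.array([y[i:i + k] for i in range(len(y) - k + 1)])

y = np.arange(1, 7)          # samples y_1 .. y_6
H = forward_hankel(y, 3)

# Sample y_3 (value 3) appears three times, along one anti-diagonal.
assert H[0, 2] == H[1, 1] == H[2, 0] == 3
```

Reversing the columns of such a matrix yields the “backward” variant described next.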

A different matrix **330** includes samples **332** of the input signal **104**. Each row includes k samples, and each column includes n−(k+1) samples. As with the matrix **300**, the matrix **330** includes a time series of samples **332** in the horizontal direction **334** and a time series of samples **332** in the vertical direction **336**. However, the samples **332** in the matrix **330** form a time series in the opposite horizontal direction **334** (right to left), so the matrix **330** represents a “backward” column Hankel matrix.

To isolate the effects of noise **108** in the actual output signal **110** from the effects of the input signal **104**, the controller **112** may generate the matrices **300**, **330** using the samples **302**, **332** of the actual output signal **110** and the input signal **104**. The controller **112** then generates a matrix **360**, which includes both the backward column Hankel matrix **330** representing the input signal **104** and a forward column Hankel matrix **300** representing the actual output signal **110**. After generating the matrix **360**, the controller **112** or other monitor decomposes the matrix **360** using CQR decomposition to project the matrix **360** into orthogonal space. The projection at least partially separates the noise effects from the input effects in the actual output signal **110**.
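As an illustrative sketch of this construction (in Python/NumPy rather than the MATLAB used in the appendix listing; the signals and sizes here are placeholders), the forward and backward column Hankel matrices and the combined matrix can be built as follows:

```python
import numpy as np

def forward_hankel(samples, k):
    """Forward column Hankel matrix: row i holds samples[i..i+k-1],
    so each sample repeats along a diagonal pattern."""
    n = len(samples)
    return np.array([samples[i:i + k] for i in range(n - (k + 1))])

def backward_hankel(samples, k):
    """Backward column Hankel matrix: the same rows, reversed left to right."""
    return forward_hankel(samples, k)[:, ::-1]

# Placeholder signals standing in for the input u and the noisy output y.
rng = np.random.default_rng(0)
u = rng.standard_normal(50)
y = rng.standard_normal(50)

k = 10
U_b = backward_hankel(u, k)   # backward column Hankel matrix of the input
Y = forward_hankel(y, k)      # forward column Hankel matrix of the output
M = np.hstack([U_b, Y])       # combined matrix [U_b Y], ready for decomposition
```

With n = 50 and k = 10, each Hankel matrix has k = 10 columns and n−(k+1) = 39 rows, matching the dimensions described above, and each sample appears along a diagonal of the forward matrix.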

Although particular examples of the matrices **300**, **330**, and **360** have been described, various changes may be made to these matrices.

The following describes example projections that may be generated from signals in the system **100**.

A first projection **400** is associated with a matrix **360**, where the left portion of the matrix **360** represents a backward column Hankel matrix of the input signal **104** and the right portion represents a forward column Hankel matrix of the ideal output signal **106**. In this example, the matrix **360** is denoted using the notation:

[U_{b}Ŷ]

where U represents a column Hankel matrix of the input signal **104**, Ŷ represents a column Hankel matrix of the ideal output signal **106**, and b indicates that a matrix is a backward column Hankel matrix. By default, any matrix without a b sub-notation represents a forward column Hankel matrix.

In this example, the matrix **360** is decomposed using CQR decomposition so as to project the matrix **360** into orthogonal space. The orthogonal space is defined by three axes **402**, **404**, **406**. The first axis **402** represents an index of the rows in the decomposed matrix, and the second axis **404** represents an index of the columns in the decomposed matrix. Both indexes increase moving from left to right. The third axis **406** represents the values contained in the decomposed matrix **360**.

The projection **400** of the matrix **360** includes two different portions **408** and **410**. The first portion **408** represents the input signal **104**, and the second portion **410** represents the ideal output signal **106**. Because the second portion **410** represents the ideal output signal **106**, it represents only the effects of the input signal **104** without any effects of noise or other disturbances **108**.

In contrast, a second projection **420** is associated with a matrix **360**, where the left portion of the matrix **360** represents a backward column Hankel matrix of the input signal **104** and the right portion represents a forward column Hankel matrix of the actual output signal **110**. In this example, the matrix **360** is denoted using the notation:

[U_{b}Y]

where Y represents a column Hankel matrix of the actual output signal **110**.

In this example, the matrix **360** is decomposed using CQR decomposition so as to project the matrix **360** into the same orthogonal space. The projection **420** of the matrix **360** includes three different portions **428**, **430**, **432**. The first portion **428** represents the input signal **104**. The second portion **430** substantially represents the portion of the actual output signal **110** caused by the input signal **104**. In other words, the second portion **430** of the projection **420** substantially represents the ideal output signal **106**. The third portion **432** substantially represents the noise **108** contained in the actual output signal **110**. Because the projection **420** substantially separates the response of the system **102** to the input signal **104** from the effects of noise **108**, the controller **112** may more accurately process the actual output signal **110**.

The previous examples involve matrices **360** that include a backward column Hankel matrix on the left side and a forward column Hankel matrix on the right side. Other matrices could be produced and then decomposed according to particular needs. For example, a projection **440** is associated with a matrix **360**, where the left portion of the matrix **360** represents a forward column Hankel matrix of the input signal **104** and the right portion represents a backward column Hankel matrix of the actual output signal **110**. In this example, the matrix **360** is denoted using the notation:

[U Y_{b}].

In this example, the matrix **360** is decomposed using CQR decomposition so as to project the matrix **360** into the orthogonal space. The projection **440** of the matrix **360** includes three different portions **448**, **450**, **452**. The first portion **448** represents the input signal **104**. The second and third portions **450**, **452** represent the portion of the actual output signal **110** caused by the input signal **104** and the portion of the actual output signal **110** caused by noise **108**. However, the second and third portions **450**, **452** are interlaced.

Similarly, a projection **460** is associated with a matrix **360**, where the left portion of the matrix **360** represents a forward column Hankel matrix of the input signal **104** and the right portion represents a forward column Hankel matrix of the actual output signal **110**. In this example, the matrix **360** is denoted using the notation:

[U Y].

In this example, the matrix **360** is decomposed using CQR decomposition so as to project the matrix **360** into the orthogonal space. The projection **460** of the matrix **360** includes four different portions **468**, **470** *a*-**470** *b*, **472**. The first portion **468** represents the input signal **104**. The second and third portions **470** *a*-**470** *b* substantially represent the portion of the actual output signal **110** caused by the input signal **104**; in this case, that portion of the actual output signal **110** has been dissected into two different parts **470** *a* and **470** *b*. The fourth portion **472** substantially represents the noise **108** contained in the actual output signal **110**.

Finally, a projection **480** is associated with a matrix **360**, where the left portion of the matrix **360** represents a backward column Hankel matrix of the input signal **104** and the right portion represents a backward column Hankel matrix of the actual output signal **110**. In this example, the matrix **360** is denoted using the notation:

[U_{b}Y_{b}].

In this example, the matrix **360** is decomposed using CQR decomposition so as to project the matrix **360** into the orthogonal space. The projection **480** of the matrix **360** includes three different portions **488**, **490**, **492**. The first portion **488** represents the input signal **104**. The second and third portions **490**, **492** represent the portion of the actual output signal **110** caused by the input signal **104** and the portion of the actual output signal **110** caused by noise **108**. However, the second and third portions **490**, **492** are interlaced.

Using one or more of these projections, the controller **112** or other monitor in the system **100** can isolate noise effects in a signal. For example, the controller **112** or other monitor could use the projection **420** to separate the effects of the input signal **104** in the actual output signal **110** from the effects of the noise **108** in the actual output signal **110**. The controller **112** or other monitor could use this information in any suitable manner. For example, the controller **112** could disregard the effects of the noise **108** in the actual output signal **110** and process only the effects of the input signal **104** in the actual output signal **110**. As another example, the controller **112** or other monitor could use this information to identify relationships between the input and output signals.

The matrices **360** used to form these projections may become quite large. For example, if n equals 10,000 samples and k equals 1,000, the matrices **300**, **330** might have 1,000 columns (k) and 8,999 rows (n−(k+1)), and the matrix **360** would have 2,000 columns and 8,999 rows.

In some embodiments, to reduce the processing power and time needed by the controller **112** to process the signals, the controller **112** processes the samples in batches. For example, the controller **112** could process samples of the input signal **104** and actual output signal **110** in batches of five hundred samples each.

To help reduce the size of the matrix needed to generate a projection, the controller **112** may generate and process a first matrix **360** associated with a first batch of the samples. The first matrix **360** is decomposed into Q_{1 }and R_{1}. To process the next batch of samples, the controller **112** generates a matrix **360** for the next batch of samples and combines that matrix **360** with R_{1}. For example, the controller **112** could combine a new matrix **360** with a previous R matrix to create a concatenated matrix as follows:

[R_{x−1}; Data_{x}]

where x represents the number of the current data segment (where x≧2), Data_{x} represents the data samples in the x-th data segment, R_{x−1} represents the R matrix associated with the (x−1)-th data segment, and the semicolon denotes stacking the previous R matrix with the new data samples. The matrix resulting from this combination is then processed by the controller **112** and decomposed. This allows the controller **112** to process a smaller matrix, even as the total number of samples becomes very large.
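This batched update can be checked numerically. The sketch below (NumPy for illustration; the batch sizes are arbitrary placeholders, and the R matrix from the first segment is assumed to be stacked with the second data segment) shows that the recursive form reproduces the R factor obtained by decomposing all samples at once, once a canonical non-negative-diagonal sign convention is applied:

```python
import numpy as np

rng = np.random.default_rng(1)
data1 = rng.standard_normal((500, 8))   # first data segment
data2 = rng.standard_normal((500, 8))   # second data segment

# Direct decomposition of all samples at once.
_, r_full = np.linalg.qr(np.vstack([data1, data2]))

# Recursive form: decompose segment 1, then stack R1 with segment 2.
_, r1 = np.linalg.qr(data1)
_, r_rec = np.linalg.qr(np.vstack([r1, data2]))

def canonical(r):
    """Flip row signs so every diagonal entry of R is non-negative."""
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0
    return signs[:, None] * r

# Both routes give the same R factor in canonical form.
assert np.allclose(canonical(r_full), canonical(r_rec))
```

The equivalence follows because both candidate R factors satisfy RᵀR = (stacked data)ᵀ(stacked data), and an upper triangular factor is unique up to row signs.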

In the example above, the samples in the previous data segments are continuously carried through the processing of future data segments. In effect, the controller **112** is concatenating the data segments together, and the projection corresponding to the x-th data segment represents all previous data segments. In other embodiments, the samples in previous data segments may be phased out of the processing of future data segments. In effect, this provides a “forgetting factor” where older data segments contribute less to the projection than newer data segments. For example, the controller **112** could combine a new matrix **360** with a previous R matrix as follows:

[λR_{x−1}; Data_{x}]

where λ represents a value between zero and one that scales the previous R matrix. A λ value of one would operate as described above. A λ value of zero causes the controller **112** to ignore the previous R matrix and only process the current data segment. A λ value between zero and one causes the controller **112** to partially consider the previous R matrix in forming the projection, which over time reduces the effects of older data segments to a greater and greater extent.
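A sketch of this forgetting-factor variant, assuming the previous R matrix is simply scaled by λ before being stacked with the new data segment (sizes are placeholders):

```python
import numpy as np

def qr_update_with_forgetting(r_prev, new_data, lam):
    """Scale the previous R factor by lam, stack it with a new data
    segment, and re-decompose; lam=1 keeps all history, lam=0 drops it."""
    stacked = np.vstack([lam * r_prev, new_data])
    _, r = np.linalg.qr(stacked)
    return r

rng = np.random.default_rng(2)
r_prev = np.triu(rng.standard_normal((4, 4)))  # R from earlier segments
segment = rng.standard_normal((100, 4))        # current data segment

r_forget = qr_update_with_forgetting(r_prev, segment, 0.5)
_, r_drop = np.linalg.qr(segment)

# With lam = 0 the previous history is ignored entirely (up to row signs).
assert np.allclose(np.abs(qr_update_with_forgetting(r_prev, segment, 0.0)),
                   np.abs(r_drop))
```

Intermediate λ values leave the current segment at full weight while shrinking the contribution of everything already folded into R, which is exactly the behavior described above.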

The projections described above are examples only. Other signals in the system **100** or other system would have different projections.

A method **500** may be used for isolating noise effects in a signal according to one embodiment of this disclosure. For ease of explanation, the method **500** is described with respect to the controller **112** operating in the system **100**. The method **500** could be used by any other apparatus or device in any system.

The controller **112** receives samples of an input signal at step **502**. This may include, for example, the controller **112** receiving samples of an input signal **104** or the controller **112** receiving the input signal **104** and generating the samples.

The controller **112** receives samples of an actual output signal at step **504**. This may include, for example, the controller **112** receiving samples of an actual output signal **110** or the controller **112** receiving the actual output signal **110** and generating the samples.

The controller **112** generates a first matrix using the samples of the input signal at step **506**. This may include, for example, the controller **112** generating a forward or backward column Hankel matrix **330** using the samples of the input signal **104**.

The controller **112** generates a second matrix using the samples of the actual output signal at step **508**. This may include, for example, the controller **112** generating a forward or backward column Hankel matrix **300** using the samples of the actual output signal **110**.

The controller **112** generates a third matrix using the first and second matrices at step **510**. This may include, for example, the controller **112** generating a third matrix **360** by concatenating the first and second matrices **300**, **330**.

The controller **112** projects the third matrix into orthogonal space at step **512**. This may include, for example, the controller **112** performing CQR decomposition to project the third matrix **360** into orthogonal space. This may also include the controller **112** generating a projection such as one of the projections described above.

At this point, the controller **112** may use the projection in any suitable manner. For example, the controller **112** could use the projection to identify a model that relates the input signal **104** to the ideal output signal **106** contained in the actual output signal **110**.

Although the method **500** for isolating noise effects in a signal has been described, various changes may be made. For example, the controller **112** could generate the third matrix at step **510** directly after the samples are collected at steps **502**, **504**. In this example, the controller **112** need not generate the first and second matrices at steps **506**, **508**.

The following describes how a model may be identified using a projection such as the [U_{b}Y] projection described above.

In general, the controller **112** may perform model identification to model the behavior of the monitored system **102**. The monitored system **102** typically may be represented in many different forms. In a particular embodiment, the monitored system **102** is modeled using a state-space model of the form:

x_{k+1} = Ax_{k} + Bu_{k}

y_{k} = Cx_{k} + Du_{k}

where u represents samples of the input signal **104**, x represents the states of the monitored system **102**, y represents the output of the system **102**, and {A,B,C,D} are matrices that represent the parameters of the system **102**. In this embodiment, the controller **112** performs model identification by determining values for {A,B,C,D}.
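This model form can be simulated directly. A minimal sketch in Python/NumPy, where the matrices {A,B,C,D} are hypothetical placeholders rather than values identified from any real system:

```python
import numpy as np

def simulate(A, B, C, D, u, x0=None):
    """Simulate x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]."""
    x = np.zeros(A.shape[0]) if x0 is None else x0
    ys = []
    for uk in u:
        ys.append(C @ x + D @ uk)   # output before the state update
        x = A @ x + B @ uk          # advance the state
    return np.array(ys)

# Hypothetical single-input single-output second-order system.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

u = np.ones((20, 1))         # step input
y = simulate(A, B, C, D, u)  # ideal (noise-free) output
# The first few outputs are 0, 1.0, 1.95, ... for this system.
```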

In some embodiments, to perform model identification, the controller **112** generates a projection **420** as described above using the signals **104**, **110** associated with the monitored system **102**. In particular embodiments, the controller **112** identifies values for A, B, C, and D by selecting one or more regions from the projection. Using the selected region(s), the controller **112** identifies pole candidates (values for A and C) and model candidates (values for B and D) for the monitored system **102**. The controller **112** then performs model validation and order reduction by selecting poles and an order for the monitored system **102**. The selected poles and order are used as the model of the monitored system **102**.

The CQR decomposition produces a matrix **602** and an upper triangular matrix **604**. The upper triangular matrix **604** includes two diagonals **606** *a* and **606** *b*. As described above, the values along the diagonal **606** *a* are each greater than or equal to zero. The diagonals **606** divide the upper triangular matrix **604** into upper, lower, left, and right sections.

In some embodiments, to identify possible poles of the monitored system **102**, the controller **112** defines one or more areas **608** *a*-**608** *c* in the upper triangular matrix **604**. Although three areas **608** *a*-**608** *c* are described here, any number of areas **608** could be defined. In particular embodiments, using the one or more areas **608** *a*-**608** *c*, the controller **112** identifies the possible poles using the following algorithm:

[V,S,U] = svd(R_{2}′,0)

U_{1} = U(:,1:n)

[N_{g},n] = size(U_{1})

g = U_{1}*diag(sqrt(ss(1:n)))

gm = g(1:N_{g}−N_{out},:)

C = g(1:N_{out},:)

A = gm\g(N_{out}+1:N_{g},:)

Poles = eig(A).

In this algorithm, [V,S,U] represents the V, S, and U matrices produced using singular value decomposition (the svd function call). U_{1} represents the values along the left-most n columns of the U matrix. The value n represents an order of the monitored system **102** and may be specified by the user, determined by thresholding the singular values in the S matrix, or determined in any other suitable manner. N_{g} represents the number of rows in U_{1}. N_{out} represents the number of outputs in the monitored system **102**. The variable g represents an observability matrix. The variable gm represents a shortened observability matrix. A and C are part of the parameter matrices for representing the monitored system **102**. The variable Poles represents the eigenvalues of the matrix A, which are the possible poles of the model. In general, if multiple areas **608** are used with the above algorithm, the number of possible pole candidates increases.
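The shift-invariance idea behind this algorithm can be sketched in NumPy (the listing above is MATLAB-style). Here the matrix standing in for R_{2} is synthetic: its columns are built from the extended observability matrix of a known hypothetical system, so the recovered poles can be checked against the true eigenvalues:

```python
import numpy as np

def pole_candidates(R2, n, n_out):
    """Sketch of the pole-identification step: SVD, an observability-style
    matrix g, and a shift-invariance least-squares solve for A."""
    # svd(R2', 0) in the listing yields (as its third output) the left
    # singular vectors of R2; np.linalg.svd(R2) gives them directly.
    U, s, _ = np.linalg.svd(R2, full_matrices=False)
    U1 = U[:, :n]                 # left-most n columns
    g = U1 * np.sqrt(s[:n])       # observability-style matrix
    Ng = g.shape[0]
    gm = g[:Ng - n_out, :]        # shortened observability matrix
    A, *_ = np.linalg.lstsq(gm, g[n_out:Ng, :], rcond=None)
    C = g[:n_out, :]
    return np.linalg.eigvals(A), A, C

# Hypothetical known system with poles 0.5 and 0.8.
A_true = np.diag([0.5, 0.8])
C_true = np.array([[1.0, 1.0]])
gamma = np.vstack([C_true @ np.linalg.matrix_power(A_true, i)
                   for i in range(10)])

rng = np.random.default_rng(3)
R2 = gamma @ rng.standard_normal((2, 6))  # synthetic stand-in for R2

poles, _, _ = pole_candidates(R2, n=2, n_out=1)
# The recovered poles match the eigenvalues of A_true (0.5 and 0.8).
```

Because the column space of the synthetic R_{2} equals that of the observability matrix, the least-squares solve returns a matrix similar to A_true, so the eigenvalues (poles) are preserved.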

Once candidates for the poles (A and C) of the model have been identified, the controller **112** identifies the model candidates (B and D). As part of this process, one or more matrices U_{fb} **610** *a*-**610** *c* can be identified in the upper triangular matrix **604**. The bottom right corner of each of these matrices **610** *a*-**610** *c* lies on the diagonal **606** *a* and is parallel to the top of the corresponding R_{2} area **608** *a*-**608** *c*. The size of the matrices **610** *a*-**610** *c* can then be selected based on the corresponding size of R_{2}.

Each of these matrices **610** *a*-**610** *c *can be rewritten from a backward matrix U_{fb }into a forward matrix U_{f}. The B and D values of the model may then be determined using the following formula:

min_{B,D}∥(I−U_{1}U_{1}^{t})R_{2}^{t}−L(B,D)U_{f}∥_{2}^{2}

where U_{1 }represents the matrix used above to find the pole candidates, I represents an identity matrix, and L(B,D) represents a matrix defined as a function of B and D.

In particular embodiments, the L(B,D) matrix is defined in terms of Γ_{x}, an order-x extended observability matrix, and H_{x}, an order-x block impulse response matrix, where χ denotes a pseudo-inverse. For example, Γ_{x} may be formed by stacking C, CA, CA^{2}, . . . , CA^{x−1}, and H_{x} may be a block lower-triangular Toeplitz matrix formed from the impulse response terms D, CB, CAB, and so on.

In many instances, the formulas and algorithm shown above identify the same model (same values for A, B, C, and D) regardless of the R_{2 }area **608** selected. In other instances, such as when a monitored system **102** suffers from drift, a validation step may be used to remove this undesired effect on the quality of the model selected. For example, in particular embodiments, the following equation is used during the validation step:

min_{p_{i}∈{p_{1} . . . p_{n}}}∥R_{E3}∥_{2}^{2}

where p_{i }represents the i-th pole of the pole candidate set, and R_{E3 }is a function of the model parameters A, B, C, and D. As shown in _{E3 }can be identified by performing CQR decomposition on a matrix formed by a backward Hankel matrix (U_{b}) of the input signal **104** and a forward Hankel matrix (Y−Ŷ) that is made of the prediction error (y−ŷ).

Although particular model identification techniques have been described, any number of R_{2} areas **608** and U_{fb} matrices **610** may be used. Also, the validation step described above may be performed during every model identification, during none of the model identifications, or during some of the model identifications. In addition, other techniques for using the projections to identify a model could be employed.

A method **700** may be used for modeling relationships between signals according to one embodiment of this disclosure. For ease of explanation, the method **700** is described with respect to the controller **112** operating in the system **100**. The method **700** could be used by any other apparatus or device in any system.

The controller **112** forms a projection associated with two or more signals at step **702**. This may include, for example, the controller **112** generating a projection such as one of the projections described above.

The controller **112** selects one or more regions in the projection at step **704**. This may include, for example, the controller **112** identifying one or more areas **608** in the projection. The controller **112** could select one or multiple areas **608** based, for example, on user input, a default number and definition of the areas **608**, or in any other suitable manner.

The controller **112** identifies one or more pole candidates for the model using the projection at step **706**. This may include, for example, the controller **112** using the algorithm shown above in Paragraph [074] to identify possible values for the poles. This may also include the controller **112** using the selected area(s) of the projection to identify the possible poles. The controller **112** could use any other suitable technique to identify values for the poles.

The controller **112** identifies one or more model candidates for the model using the projection at step **708**. This may include, for example, the controller **112** using the formulas shown above in Paragraphs [076] and [077] to identify values for the model candidates. This may also include the controller **112** using the selected area(s) of the projection and various information generated during identification of the pole candidates to identify the model candidates. The controller **112** could use any other suitable technique to identify values for the model candidates.

The controller **112** performs model validation and order reduction if necessary at step **710**. This may include, for example, the controller **112** using the validation step described above in Paragraph [078] to validate the identified model. This may also include the controller **112** performing system-order reduction to reduce the order of the identified model. However, as described above, the same model may be produced regardless of which R_{2 }area **608** is used by the controller **112** in particular situations. As a result, in these situations, the controller **112** could skip step **710**.

At this point, the controller **112** could use the identified model in any suitable manner. For example, the controller **112** could use the model to “de-noise” the actual output signal **110**, which is labeled Y. As a particular example, the controller **112** could identify the highest-order model that is plausible. The controller **112** then uses the model to predict what the actual output signal **110** would look like without any noise or other disturbances **108**. The predicted signal is referred to as Ŷ. The controller **112** then defines the noise or drift, called e, in the actual output signal **110** using the formula:

*e=Y−Ŷ*, or

*Ŷ=Y−e.*

Here, the signal defined by Ŷ can be explained by the input signal **104**, and the noise or drift defined by e is not explained by the input signal **104**. This represents one example use of the identified model. The controller **112** could use the identified model in any other suitable manner.
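A minimal sketch of this de-noising step, using a hypothetical first-order identified model; the injected noise is recovered exactly here only because the prediction uses the same model that generated the data:

```python
import numpy as np

# Hypothetical identified first-order model (scalar state-space parameters).
a, b, c, d = 0.9, 1.0, 0.5, 0.0

def predict(u):
    """Noise-free prediction y-hat: x[k+1] = a*x[k] + b*u[k],
    y[k] = c*x[k] + d*u[k]."""
    x, out = 0.0, []
    for uk in u:
        out.append(c * x + d * uk)
        x = a * x + b * uk
    return np.array(out)

rng = np.random.default_rng(4)
u = rng.standard_normal(200)
noise = 0.1 * rng.standard_normal(200)
y = predict(u) + noise   # measured output Y (signal plus disturbance)
e = y - predict(u)       # residual e = Y - Y-hat, the unexplained part
```

In this constructed example the residual e equals the injected disturbance, illustrating the split between the part of Y explained by the input and the part that is not.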

Although the method **700** for modeling relationships between signals has been described, various changes may be made. For example, in some embodiments, the controller **112** could receive a projection produced by another component in the system **100** and process the projection. In these embodiments, the controller **112** need not generate the projection at step **702**.

It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. The term “controller” means any device, system or part thereof that controls at least one operation. A controller may be implemented in hardware, firmware, software, or some combination of at least two of the same. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.

While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.

function [Q,R]=CQR_H(A,save)
% Usage: [Q,R]=CQR_H(A)
%        Q=CQR_H(A)
% This is the CQR Householder algorithm. It is as economical as
% the standard QR Householder algorithm.
% For testing the algorithm accuracy:
A0=A;
[n,m]=size(A);
n1=n+1;
%% triangularize A
mm=min(m,n-1);
for j=1:mm
  v=HouseHld(A(j:n,j));
  A(j:n,j:m)=HousePre(A(j:n,j:m),v);
  A(j+1:n,j)=v(2:(n1-j));
end
if nargout <= 1
  Q=A;
else
  if nargin < 2 | n <= m
    R=zeros(size(A));
    Q=eye(n);
    ncol=n;
  elseif save == 0 & n > m
    R=zeros(m,m);
    Q=[eye(m);zeros(n-m,m)];
    ncol=m;
  else
    error('input format error')
  end
  for j=mm:-1:1
    v=[1;A(j+1:n,j)];
    Q(j:n,j:ncol)=HousePre(Q(j:n,j:ncol),v);
    R(1:j,j)=A(1:j,j);
    if R(j,j) < 0
      R(j,j:m)=-R(j,j:m);
      if nargout > 1
        Q(j:n,j)=-Q(j:n,j);
      end
    end
  end
  for j=mm+1:m
    R(1:n,j)=A(1:n,j);
  end
  if m >= n & R(n,n) < 0
    R(n,n:m)=-R(n,n:m);
    if nargout > 1
      Q(:,n)=-Q(:,n);
    end
  end
end

function [v,P]=HouseHld(x,i)
% v=HouseHld(x,i)
% Householder vector for reflecting x onto the i-th coordinate axis.
n=length(x);
nx=norm(x);
v=zeros(size(x));
if nargin == 1, i=1; end
ind=[1:i-1,i+1:n];
if nx > eps
  b=x(i)+sign(x(i))*nx;
  v(ind)=x(ind)/b;
else
  v(ind)=x(ind);
end
v(i)=1;
if nargout > 1
  P=eye(n)-(2*v)*(v'/(v'*v));
end

function A=HousePre(A,v)
% Usage: Ap=HousePre(A,v)
% Pre-multiply the Householder transformation P(v) to A:
%   Ap = P(v)*A
A = A + ((-2/(v'*v))*v)*(v'*A);
%A = A - ((2/(v'*v))*v)*(v'*A);
%  = (I - 2/(v'*v)*(v*v')) * A
% thus,
% P(v) = I - 2/(v'*v)*(v*v') -> symmetric
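The sign convention that CQR_H enforces (a non-negative diagonal on R) can be reproduced on top of any standard QR routine. The following is a NumPy sketch of that convention, not a translation of the listing above:

```python
import numpy as np

def cqr(a):
    """Canonical QR: standard QR followed by sign flips so every
    diagonal entry of R is non-negative."""
    q, r = np.linalg.qr(a)
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0
    # Flipping column j of Q and row j of R leaves Q*R unchanged.
    return q * signs, signs[:, None] * r

rng = np.random.default_rng(5)
a = rng.standard_normal((30, 6))
q, r = cqr(a)

assert np.all(np.diag(r) >= 0)          # canonical sign convention
assert np.allclose(q @ r, a)            # still a valid factorization
assert np.allclose(q.T @ q, np.eye(6))  # Q has orthonormal columns
```

Fixing the signs this way makes the R factor unique for a full-rank input, which is what allows the recursive batch updates described earlier to be compared and combined consistently.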

Patent Citations

Cited Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US4740968 * | Oct 27, 1986 | Apr 26, 1988 | International Business Machines Corporation | ECC circuit failure detector/quick word verifier |

US5490516 | May 6, 1993 | Feb 13, 1996 | Hutson; William H. | Method and system to enhance medical signals for real-time analysis and high-resolution display |

US5706402 * | Nov 29, 1994 | Jan 6, 1998 | The Salk Institute For Biological Studies | Blind signal processing system employing information maximization to recover unknown signals through unsupervised minimization of output redundancy |

US5980097 * | Feb 6, 1997 | Nov 9, 1999 | U.S. Philips Corporation | Reduced complexity signal converter |

US5991525 * | Aug 22, 1997 | Nov 23, 1999 | Voyan Technology | Method for real-time nonlinear system state estimation and control |

US6026334 | Jul 29, 1997 | Feb 15, 2000 | Weyerhaeuser Company | Control system for cross-directional profile sheet formation |

US6510354 * | Apr 19, 2000 | Jan 21, 2003 | Ching-Fang Lin | Universal robust filtering process |

US6564176 * | Aug 17, 2001 | May 13, 2003 | Nonlinear Solutions, Inc. | Signal and pattern detection or classification by estimation of continuous dynamical models |

US6615164 * | Apr 15, 1999 | Sep 2, 2003 | Synopsys Inc. | Method and apparatus for representing integrated circuit device characteristics using polynomial equations |

US6622117 * | May 14, 2001 | Sep 16, 2003 | International Business Machines Corporation | EM algorithm for convolutive independent component analysis (CICA) |

US6757569 * | Apr 20, 2001 | Jun 29, 2004 | American Gnc Corporation | Filtering process for stable and accurate estimation |

US6907513 * | Mar 20, 2001 | Jun 14, 2005 | Fujitsu Limited | Matrix processing method of shared-memory scalar parallel-processing computer and recording medium |

US7003380 * | Feb 27, 2002 | Feb 21, 2006 | Sikorsky Aircraft Corporation | System for computationally efficient adaptation of active control of sound or vibration |

US7035357 | Jun 7, 2001 | Apr 25, 2006 | Stmicroelectronics N.V. | Process and device for extracting digital data contained in a signal conveyed by an information transmission channel, in particular for a cellular mobile telephone |

US7089159 * | Apr 2, 2001 | Aug 8, 2006 | Nec Electronics Corporation | Method and apparatus for matrix reordering and electronic circuit simulation |

US20030004658 * | Mar 20, 2001 | Jan 2, 2003 | Simmonds Precision Products, Inc. | Reducing vibration using QR decomposition and unconstrained optimization |

US20030061035 | Nov 9, 2001 | Mar 27, 2003 | Shubha Kadambe | Method and apparatus for blind separation of an overcomplete set mixed signals |

US20040057585 * | Sep 23, 2002 | Mar 25, 2004 | Anton Madievski | Method and device for signal separation of a mixed signal |

US20040071103 * | Feb 22, 2002 | Apr 15, 2004 | Pertti Henttu | Method and arrangement for interferance attenuation |

US20040071207 | Oct 25, 2001 | Apr 15, 2004 | Skidmore Ian David | Adaptive filter |

US20040078412 * | Oct 2, 2003 | Apr 22, 2004 | Fujitsu Limited | Parallel processing method of an eigenvalue problem for a shared-memory type scalar parallel computer |

US20050015205 * | Jul 11, 2001 | Jan 20, 2005 | Michael Repucci | Method and system for analyzing multi-variate data using canonical decomposition |

Non-Patent Citations

Reference | ||
---|---|---|

1 | * | Cardoso, J.F. "Blind signal separation: statistical principles" Proc. of the IEEE vol. 9, No. 10, pp. 2009-2026. |

2 | Cho Y.M. et al., "Fast recursive identification of state space models via exploitation of displacement structure," Automatica, vol. 30, No. 1, Jan. 1994, p. 45-49. | |

3 | Dooren, Paul, "Numerical Linear Algebra for Signals Systems and Control," Apr. 24, 2003, 161 pages. | |

4 | Gilbert Strang, "Introduction to Linear Algebra," Third Edition, 10 pages. 2003. | |

5 | Hanna, Magdy T., Multiple signal Extraction by Multiple Interference Attenuation in the Presence of Random Noise in Seismic Array Data, IEEE Transactions on Signal Processing IEEE USA, vol. 51, No. 7, Jul. 2003, p. 1683-1694. | |

6 | http://www.alglib.net/matrixops/general/svd.php, 4 pages, 2007. | |

7 | http://www.netlib.org/lapack/lug/node53.html, 3 pages, 1999. | |

8 | Ku et al., "Preconditioned Iterative Methods for solving Toeplitz-Plus-Hankel Systems," IEEE, 1992, pp. 109-112. | |

9 | Moonen et al., "On-and off-line identification of linear state-space models," Int. J. Control, 1989, vol. 49, No. 1, 8 pages. | |

10 | Mosises Salmeron et al., "SSA, SVD, QR-cp, and RBF Model Reduction," ICANN 2002, pp. 589-594, 2002. | |

11 | Olshevsky et al., "Matrix-vector Product for Confluent Cauchy-like Matrices with Application to Confluent Rational Interpolation," ACM, 2000, pp. 573-581. | |

12 | * | Sarajedini et al., "Blind signal separation with a projection pursuit index" 1998, IEEE, pp. 2125-2128. |

13 | Sima V. et al., "Efficient numerical algorithms and software for subspace-based system identification," Proceedings of the 2000 IEEE Int'l Symposium, Sep. 25-27, 2000, p. 1-6. | |

14 | * | Swinnen et al. "Detection and multichannel SVD-based filtering of trigeminal somatosensory evoked potentials" Medical & Biological Engineering & Computing, 2000, vol. 38, pp. 297-305. |

15 | Swinnen et al., "Detection and multichannel SVD-based filtering of trigeminal somatosensory evoked potentials," Medical & Biological Engineering & Computing, 2000, vol. 38, pp. 297-305. | |

16 | Usefi et al., "A Note on Minors of a Generalized Hankel Matrix," Intern. Math. Journal, vol. 3, 2003, No. 11, pp. 1197-1201. | |

17 | * | Zhang et al., "Blind Deconvolution of Dynamical Systems: A State-Space Approach," Journal of Signal Processing, vol. 4, No. 2, Mar. 2000, pp. 111-130.

Referenced by

Citing Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US8203485 * | | Jun 19, 2012 | Fujitsu Limited | Method of estimating direction of arrival and apparatus thereof |

US8964338 | Jan 9, 2013 | Feb 24, 2015 | Emerson Climate Technologies, Inc. | System and method for compressor motor protection |

US8974573 | Mar 15, 2013 | Mar 10, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9017461 | Mar 15, 2013 | Apr 28, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9021819 | Mar 15, 2013 | May 5, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9023136 | Mar 15, 2013 | May 5, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9046900 | Feb 14, 2013 | Jun 2, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring refrigeration-cycle systems |

US9081394 | Mar 15, 2013 | Jul 14, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9086704 | Mar 15, 2013 | Jul 21, 2015 | Emerson Climate Technologies, Inc. | Method and apparatus for monitoring a refrigeration-cycle system |

US9121407 | Jul 1, 2013 | Sep 1, 2015 | Emerson Climate Technologies, Inc. | Compressor diagnostic and protection system and method |

US9140728 | Oct 30, 2008 | Sep 22, 2015 | Emerson Climate Technologies, Inc. | Compressor sensor module |

US9194894 | Feb 19, 2013 | Nov 24, 2015 | Emerson Climate Technologies, Inc. | Compressor sensor module |

US9285802 | Feb 28, 2012 | Mar 15, 2016 | Emerson Electric Co. | Residential solutions HVAC monitoring and diagnosis |

US9304521 | Oct 7, 2011 | Apr 5, 2016 | Emerson Climate Technologies, Inc. | Air filter monitoring system |

US9310094 | Feb 8, 2012 | Apr 12, 2016 | Emerson Climate Technologies, Inc. | Portable method and apparatus for monitoring refrigerant-cycle systems |

US9310439 | Sep 23, 2013 | Apr 12, 2016 | Emerson Climate Technologies, Inc. | Compressor having a control and diagnostic module |

US20110050500 * | | Mar 3, 2011 | Fujitsu Limited | Method of estimating direction of arrival and apparatus thereof |

Classifications

U.S. Classification | 703/2, 702/196, 702/191, 702/56, 700/55, 702/190, 702/197, 700/54, 381/94.1 |

International Classification | G05B23/02, G05B13/00, G05B13/02, G06F7/00, G05B17/02, G01F17/00, G05B13/04, G01F19/00, G06F17/00, G06F17/16, G06F7/60, G06F17/10, H04B15/00 |

Cooperative Classification | G05B17/02 |

European Classification | G05B17/02 |

Legal Events

Date | Code | Event | Description |
---|---|---|---|

Jul 12, 2004 | AS | Assignment | Owner name: HONEYWELL INTERNATIONAL INC., NEW JERSEY; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: LU, JOSEPH Z.; REEL/FRAME: 015565/0909; Effective date: 20040709 |

Oct 20, 2009 | CC | Certificate of correction | |

Jan 25, 2013 | FPAY | Fee payment | Year of fee payment: 4 |