Publication number | US20050018540 A1 |
Publication type | Application |
Application number | US 10/748,690 |
Publication date | Jan 27, 2005 |
Filing date | Dec 30, 2003 |
Priority date | Feb 3, 1997 |
Also published as | US6292433, US6552964, US6671227, US20020012289, US20020064093 |
Inventors | Jeffrey Gilbert, Alice Chiang, Steven Broadstone |
Original Assignee | Teratech Corporation |
This application is a continuation application of U.S. application Ser. No. 09/921,976, filed on Aug. 21, 2001, issuing on Dec. 30, 2003, as U.S. Pat. No. 6,671,227, which is a continuation application of U.S. application Ser. No. 09/364,699, filed on Jul. 30, 1999, which is a continuation application of International Application No. PCT/US98/02291, filed on Feb. 3, 1998, now Publication No. WO 98/34294, which is a continuation-in-part application of U.S. Ser. No. 08/965,663 filed on Nov. 6, 1997, now U.S. Pat. No. 6,111,816, issued Aug. 29, 2000, which claims the benefit of U.S. Provisional Patent Application No. 60/036,387, filed on Feb. 3, 1997, the entire teachings of the above applications being incorporated herein by reference.
One use of sensor arrays is to isolate signal components that are traveling from, or propagating toward, a particular direction. Such arrays find use in a number of different applications. For example, sonar systems use sensor arrays to process underwater acoustic signals and determine the location of a noise source; arrays are also used in radar systems to produce precisely shaped radar beams. Array processing techniques for isolating received signals are known as beamforming; when the same or analogous principles are applied to focus the transmission of signals, the techniques are referred to as beamsteering.
Considering the process of beamforming in particular, it is typically necessary to use a fairly large number of signal processing components to form the desired directional beams. The signal from each sensor is typically divided into representative components by subjecting each signal to multiple phase shift, or time delay, operations which cancel the equivalent time delay associated with the respective relative position of the sensor in the array. To form the directional beam the time shifted signals from each sensor are then added together. The imparted time delays are chosen such that the signals arriving from a desired angular direction add coherently, whereas those signals arriving from other directions do not add coherently, and so they tend to cancel. To control the resulting beamwidth and sidelobe suppression, it is typical for each time delayed signal to be multiplied or “amplitude shaded” by a weighting factor which depends upon the relative position of the sensor in the array.
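The delay, weight, and sum operations described above can be sketched in a few lines of code. The following is an illustrative model only, using whole-sample delays and a simple shading vector; the function name and interface are hypothetical, not from the patent.

```python
import numpy as np

def delay_and_sum(samples, delays, weights):
    """Toy delay-and-sum beamformer.

    samples: (n_sensors, n_time) array of sensor outputs
    delays:  per-sensor delays in whole samples (illustrative)
    weights: per-sensor amplitude-shading factors
    """
    out = np.zeros(samples.shape[1])
    for sig, d, w in zip(samples, delays, weights):
        # shift each sensor signal to cancel its propagation delay,
        # then apply the shading weight and accumulate
        out += w * np.roll(sig, -d)
    return out
```

When the chosen delays match the arrival-time differences across the array, the per-sensor signals line up and add coherently; for mismatched delays the sum tends to cancel, which is the directional selectivity described above.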
Beamforming in one dimension can thus be realized through a relatively straight forward implementation using a linear array of sensors and a beamforming processor, or beamformer, that delays each sensor output by the appropriate amount, weights each sensor output by multiplying by the desired weighting factor, and then sums the outputs of the multiplying operation. One way to implement such a beamformer is to use a tapped delay line connected to each array element so that the desired delay for any direction can be easily obtained by selecting the proper output tap. The beam steering operation then simply consists of specifying the appropriate tap connections and weights to be applied.
However, a beamforming processor becomes much more complex when a two-dimensional sensor array is used. Not only does the number of time delay operations increase as the square of the size of the array, but the physical structures required to connect each sensor to its corresponding delay also become complex. At the same time, each delay unit must be provided with multiple taps for the formation of multiple beams. The problem can become prohibitively complicated when the simultaneous formation of multiple beams is required.
As to implementation choices, beamforming technology was originally developed for the detection of acoustic signals in sonar applications. The beamformers built for these early sonars used analog delay lines and analog signal processing components to implement the delay and sum elements. Networks of resistors were then used to weight and sum the appropriately delayed signals. However, the number of beams that can be implemented easily with such techniques is limited, since each beam requires many discrete delay lines, or delay lines with many taps and many different weighting networks. As a result, it became common to share a delay line by using scanning switches to sequentially look in all directions. With this approach, however, only one beam is available at a given time.
Recent advancements in integrated circuit electronics have provided the capability to implement practical digital beamforming systems. In these systems a signal from each sensor is first subjected to analog-to-digital conversion prior to beamforming. The beamformers are implemented using digital shift registers to implement the delay and digital multiplier components to implement the required weighting. The shift registers and multiplier components are typically controlled by command signals that are generated in general purpose computers using algorithms or equations that compute the values of the delays and phase weightings necessary to achieve the desired array beam position. Beam control thus requires fairly complex data processors and/or signal processors to compute and supply proper commands; this is especially the case if more than one beam is to be formed simultaneously.
For these reasons, few multi-dimensional, multiple-beam systems exist that can operate in real time with minimal implementation complexity.
The invention is a beamsteering or beamforming device (generically, a beamforming device) that carries out multi-dimensional beamforming operations as consecutive one-dimensional operations. In a preferred embodiment, a transpose operation is interposed between the two one-dimensional operations. For example, beamforming for a two-dimensional array of sensors is carried out as a set of projections of each desired output beam onto each of the two respective axes of the array.
Signal samples are periodically taken from each sensor in the array and then operated on as a group, or matrix, of samples. A first one-dimensional (1D) beamformer is used to form multiple beams for each sensor output from a given row of the sample matrix. The multiple output beams from the first 1D beamformer are then applied to a transposing operation which reformats the sample matrix such that samples originating from a given column of the sensor array are applied as a group to a second one-dimensional (1D) beamformer.
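As a numerical sketch, the row-beamform, transpose, and column-beamform sequence can be modeled with matrix products. This substitutes simple steering matrices for the time-delay hardware described in the text, so it is a behavioral illustration only; all names are hypothetical.

```python
import numpy as np

def separable_2d_beamform(sample_matrix, steer_x, steer_y):
    """Model of the two-pass beamformer: sample_matrix is (m rows, n cols),
    steer_x maps the n samples of a row to beams along the x axis, and
    steer_y maps the m samples of a column to beams along the y axis."""
    # first 1D beamformer: beamform every row along the x axis
    rows_done = sample_matrix @ steer_x
    # transposer: reformat so samples from a given column form a row
    transposed = rows_done.T
    # second 1D beamformer: beamform along the y axis
    return transposed @ steer_y
```

The result matches a direct two-dimensional summation over both axes, which is the decomposition the invention exploits.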
The beamformer can be implemented in an architecture which either operates on the samples of the sensor outputs in a series of row and column operations, or by operating on the sample matrix in parallel. In the serial implementation, a group of multiplexers are used at the input of the first 1D beamformer. Each multiplexer sequentially samples the outputs of the sensors located in a given column of the array. The multiplexers operate in time synchronization such that at any given time, the outputs from the group of the multiplexers provide samples from the sensors located in each row of the array.
The multiplexers then feed the first 1D beamformer that calculates the projection of each row onto a first array axis, for each of the desired angles. In the serial implementation, the first 1D beamformer is implemented as a set of tapped delay lines formed from a series of charge coupled devices (CCDs). Each delay line receives a respective one of the multiplexer outputs. A number of fixed weight multipliers are connected to predetermined tap locations in each delay line, with the tap locations determined by the set of desired angles with respect to the first array axis, and the weights depending upon the desired beam width and sidelobe suppression. Each output of the first 1D beamformer is provided by adding one of the multiplier outputs from each of the delay lines.
The serial implementation of the transposer uses a set of tapped delay lines with one delay line for each output of the first 1D beamformer. The tapped delay lines have a progressively larger number of delay stages. To provide the required transpose operation, samples are fed into the delay lines in the same order in which they are received from the first 1D beamformer; however, the samples are read out of the delay lines in a different order. Specifically, at a given time, the output of the beamformer are all taken from a specific set of the last stages of one of the delay lines.
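Behaviorally, the transposer is a corner turner: samples enter in row order and leave in column order. The following queue-based model captures only that reordering, not the CCD stage counts or tap timing; the function name is illustrative.

```python
from collections import deque

def corner_turner(rows):
    """Model of the serial transposer: one queue (delay line) per column.
    Rows are written in arrival order; reading each queue in turn then
    yields the columns, i.e. the transposed sample stream."""
    n = len(rows[0])
    lines = [deque() for _ in range(n)]
    for row in rows:                  # samples arrive one row at a time
        for j, v in enumerate(row):
            lines[j].append(v)
    return [list(q) for q in lines]   # read out: one column per line
```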
Finally, the second 1D beamformer consists of a set of tapped delay lines, fixed weight multipliers and adders in the same manner as the first 1D beamformer. However, the weights and delays applied by the second 1D beamformer are determined by the set of desired angles to be formed with respect to a second axis of the array.
In a parallel implementation of the invention, the multiplexers are not used, and instead the outputs of the array are fed directly to a set of parallel processing elements which operate on samples taken from all of the sensors simultaneously. Each processing element produces a set of beamformed outputs that correspond to the samples taken from one of the rows of sensors beamformed at each of the desired angles with respect to the first array axis. In this parallel implementation, the transposing operation is carried out by simply routing the outputs of the processing elements in the first 1D beamformer to the appropriate inputs of the second 1D beamformer. The second 1D beamformer likewise is implemented as a set of parallel processing elements, with each processing element operating on beamformed samples corresponding to those taken from one of the columns of the array, beamformed at each of the desired angles with respect to the second array axis.
In another preferred embodiment of the invention, a low-power, time-domain delay-and-sum beamforming processor uses a sequence of programmable delay circuits to provide a conformal acoustic lens. This electronically adjustable conformal acoustic lens has a plurality of subarrays that can be separately controlled to adjust viewing angle, and their outputs are coherently summed for imaging.
The invention provides a substantial advantage over prior art beamformers. For example, a device capable of steering up to one hundred beams for a ten-by-ten sonar array can be implemented on a single integrated circuit chip operating at a relatively low clock rate of 3.5 megahertz (MHz), representing a continuous equivalent throughput rate of approximately 14 billion multiply-accumulate operations per second.
This invention is pointed out with particularity in the appended claims. The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Turning attention now to the drawings,
The beamforming system 10 consists of a number of sensors 12 arranged in a planar array 14, a number, n, of multiplexers, 17-0, 17-1 . . . 17-(n−1), a first one-dimensional (1D) beamformer 18, a transposer 20, and a second 1D beamformer 22.
The array 14 consists of a number of sensors 12 arranged in an array of m rows 15-0, 15-1, . . . 15-(m−1), each row having n sensors 12, and n columns 16-0, 16-1, . . . 16-(n−1), each column having m sensors 12. The array may or may not be square; that is, n may or may not be equal to m.
The exact type of sensor 12 depends upon the particular use of the system 10. For example, in a system 10 intended for application to sonar, each sensor 12 is a hydrophone. In an application to radar systems, each sensor 12 is an antenna.
The remainder of the components of the system 10 operate to form multiple output beams 24 simultaneously. Before proceeding with a detailed description of the structure and operation of system 10, it is helpful to define a notation to refer to the various sensors 12 and as shown in
The notation Dx,v is used to refer to a beam formed using all of the sensors located in a given column, x, at a particular angle, v, with respect to the array 14. Dw,y indicates a beam formed using the sensors 12 in a given row, y, at a particular angle, w, with respect to the array. The notation Dw,v denotes the beam formed at a two-dimensional angle (w,v) with respect to the array 14. Dw,v[t] indicates a beam formed at angles (w,v) at a time, t, or a depth, t, from the (x,y) plane of the array 14.
With reference now to
As can be seen from the illustration, the beam 26 formed at the angle (w,v) can be considered as having a pair of components projected upon two planes formed by the z axis and each of the array axes x and y. In particular, the beam 26 has a first component 26-1 in the xz plane forming an angle w with respect to the x axis, as well as a second component 26-2 in the yz plane forming an angle v with respect to the y axis.
This representation of the beam 26 as a pair of components 26-1 and 26-2 projected onto the orthogonal planes xz and yz is based upon the assumption that a far-field approximation is valid for processing signals received from the array 14. The far-field approximation will be valid for an array 14 in most sonar applications, for example. In such applications, the sensors 12 may typically be spaced approximately one meter apart, with the sound source located at a distance of 100 meters or farther from the array 14. The far-field approximation is thus valid in applications where the sensor spacing, l, is much smaller than the distance to the source being sensed. A difference of at least two orders of magnitude between the array sensor spacing and the distance to the source is sufficient for the approximation to be valid.
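The two-orders-of-magnitude rule of thumb stated above is easy to encode; the helper below is a hypothetical convenience for checking it, not part of the patent.

```python
def far_field_ok(sensor_spacing_m, source_range_m, margin=100.0):
    """Far-field validity check: the distance to the source should exceed
    the sensor spacing by at least two orders of magnitude (margin=100)."""
    return source_range_m >= margin * sensor_spacing_m
```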
The operations required to form a number of desired beams 26 at a number of angles (w,v) can thus be decomposed into a pair of successive one-dimensional operations on the sensor outputs. Beam steering in a given direction (w,v) is accomplished as the projection of the beam 26 onto the xz plane forming an angle w with the x axis, followed by a projection onto the yz plane forming an angle v with respect to the y axis.
Returning now to
(wi, vj) for i = 0 to n−1, and j = 0 to m−1
The
(Di,j) for i = 0 to n−1, and j = 0 to m−1
The first 1D beamformer 18 performs a beamforming operation along the x direction at each of the desired beam angles w0, w1, . . . , w(n−1). For example, the output Dw0,y0 represents the result of beamforming, at a beam angle w0, the samples having a row coordinate of zero. That is, the output Dw0,y0 indicates the result of the beamforming operation on samples D0,0; D1,0; . . . ; D(n−1),0 located in row 15-0 at one of the desired beam angles w0. Likewise, Dw1,y0 corresponds to the output of the 1D beamformer 18 at beam angle w1, and so on.
The first beamformed matrix 32 output by the first 1D beamformer 18 thus represents the input samples Dx,y beamformed along the x axis at each of the respective desired beam angles w0, w1, . . . , w(n−1).
The transposer 20 transposes the rows and columns of the first beamformed matrix 32 to produce a transposed matrix 34. The transposed matrix 34 arranges the beamformed samples so that those having the same corresponding y value are located in a given column, and those having the same beam angle, w, are located in a given row. This permits the second 1D beamformer to perform the 1D beamforming operation on the samples in each row, with the different angles vj, for j = 0 to (m−1).
As a result, the output matrix 36 from the second 1D beamformer 22 represents the two-dimensional beamformed outputs 24, with the output Dw0,v0 representing the beam at angle (w0,v0), the output Dw0,v1 corresponding to the beam at angle (w0,v1), and so on. In other words, the sample outputs from the second 1D beamformer 22 correspond to all two-dimensional beams formed at the desired angles
(wi, vj) for i = 0 to n−1, and j = 0 to m−1.
Although
For the serial pipelined implementation of the invention, the matrices in
The leftmost column of the matrix 30 indicates the order of the outputs taken from the first multiplexer 17-0 of
Since the first 1D beamformer 18 performs a 1D beamforming operation on the samples in a given row 15, the first 1D beamformer can be implemented as a pipelined device such that a new row of samples can be immediately applied to the device and the operation repeated.
The tapped delay lines 40 insert appropriate delays in the sensor outputs to account for relative propagation delays of a signal from a particular location. The delay lines 40 are each tapped such that the outputs from a certain number of delay stages are provided to the input of a multiplier 41.
The internal clock rate of each delay line 40 is ten times the input sample rate, fs, to permit the sampling of ten sensors into each tapped delay line 40. The total number of stages in each delay line 40 is sufficient to provide the maximum delay associated with forming a beam at the largest required angle, w. In the illustrated implementation, the total length of the delay line 40-0 shown is approximately 1350 stages, with ten tap positions set to provide ten equally spaced angles, w. The positions of the taps, that is, the exact positions at which the inputs to the respective multipliers 41 are taken, depend upon the desired number of beams. The desired beam shape is defined by the weights applied to the multipliers 41.
Thus for an array 14 forming ten beams from each row 15 of input samples, the first 1D beamformer 18 consists of ten tapped delay lines, each delay line having ten taps and ten multipliers 41.
If the number and position of the desired beams is known in advance, the tap positions and constant values input as weights to the multipliers 41 can be hard wired or mask programmable.
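One plausible way to choose tap locations is to make each tap's stage count proportional to the extra path delay sin(w)·d/c at its steering angle, shifted so the smallest delay lands at stage 0. The sketch below illustrates this under assumed parameters (element spacing, sound speed, internal clock rate); it is not the patent's specific 1350-stage design.

```python
import math

def tap_positions(n_angles, spacing_m, sound_speed_mps, clock_hz, max_angle_rad):
    """Illustrative tap placement for one tapped delay line: one tap per
    steering angle, delay proportional to sin(angle), shifted so the
    earliest tap sits at stage 0."""
    angles = [max_angle_rad * (2 * i / (n_angles - 1) - 1)
              for i in range(n_angles)]                    # equally spaced angles
    raw = [spacing_m * math.sin(w) / sound_speed_mps * clock_hz
           for w in angles]                                # delay in clock stages
    base = min(raw)
    return [round(r - base) for r in raw]
```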
The tapped delay lines 40 are preferably implemented as charge coupled device (CCD) type delay lines with fixed weight multipliers. A preferred implementation of this invention uses a non-destructive sensing type of charge domain device described in a co-pending U.S. patent application Ser. No. 08/580,427, filed Dec. 27, 1995 (MIT Case Number 7003), by Susanne A. Paul entitled “Charge Domain Generation and Replication Devices” the entire contents of which is hereby incorporated by reference.
The outputs of the multipliers 41 are then simultaneously summed to accomplish the desired multiple simultaneous beamforming functions, forming the desired beam output along a given row. For example, the output Dw0 is taken by summing the outputs of the last multipliers 41-0-9, 41-1-9, . . . , 41-9-9 associated with each of the tapped delay lines 40.
The number of delay stages within each of the delay lines 50 progressively increases with the column index. For example, the first tapped delay line 50-0 has a length which is one more than the number of rows, m, in the matrix, or 11 stages, the second delay line 50-1 is 12 stages long, and so on, until the 10th delay line 50-9 is 20 stages long. Only the last 10 stages of each delay line 50 are tapped to provide outputs.
In operation, the taps associated with each delay line are enabled at the same time, in a time slot associated with that delay line. For example, at a first time p0 all of the taps from the first delay line 50-0 are enabled in parallel to provide the ten outputs Dw0,y0; Dw0,y1; . . . Dw0,y9. At a second time p1, only the taps from the second delay line 50-1 are enabled. The operation continues until a time p9, at which the taps on the last delay line 50-9 are enabled.
The ten processing elements 140 thus operate in parallel to produce 100 outputs at the same time, Dw0,y0; Dw1,y0; . . . ; Dw9,y9, that represent the ten respective beams formed along the x axis for each row.
In this parallel implementation, the transposer 20 is simply the proper routing of the outputs of the first 1D beamformer 18 to the inputs of the second 1D beamformer 22. The second 1D beamformer 122 is implemented in much the same manner as the first 1D beamformer 118 and includes a bank of ten processing elements 142-0, 142-1 . . . 142-9. The ten processing elements 142 operate in parallel to produce the 100 beamformed outputs Dw0,v0; Dw1,v1; . . . ; Dw9,v9.
An exemplary parallel processing element 140-0 is shown in detail in
In this parallel implementation the clock rate of the delay lines 144 needed to accomplish real-time processing may be ten times slower; the clock rate need only be the same as the input sampling rate fs. The trade-off, however, is that ten of the processing elements 140 are required to produce the necessary beamformed matrix 32.
Processing elements 142 associated with the second 1D beamformer 122 are similar to the exemplary processing element 140-0.
Finally with respect to
Another preferred embodiment of the invention relates to a time-domain delay-and-sum beamforming processor that can simultaneously process the returns of a large two-dimensional transducer array. The low-power, highly integrated beamformer is capable of real-time processing of the entire array and enables a compact, affordable unit suitable for many different applications. A delay-and-sum beamformer allows a 2D array to "look" for signals propagating in a particular direction. By adjusting the delays associated with each element of the array, the array's directivity can be electronically steered toward the source of radiation. By systematically varying the beamformer's delays and its shading along a 2D image plane, a 2D scan response of the array can be measured and resulting 2D images representing the 2D radiation sources can be created.
A schematic diagram of a time-domain beamforming device for a 3D ultrasound/sonar imaging system 300 is illustrated in
As shown in
The use of coded or spread-spectrum signaling has gained favor in the communications community. It is now routinely used in satellite, cellular, and wire-line digital communications systems. In contrast, the application of this technique to acoustic systems has been limited, primarily because of signal propagation conditions and the relatively slow speed of sound in water (1500 m/s) or air when compared with electromagnetic propagation.
Despite these difficulties, the use of coded signals in underwater acoustic systems, for example, offers the potential for higher-resolution imaging while significantly lowering the probability of external detection. These signals also provide signal processing gain that improves the overall system detection sensitivity.
Direct sequence modulation is the modulation of a carrier signal by a code sequence. In practice, this modulation can be AM (pulse), FM, amplitude, phase or angle modulation. The code can also be a pseudorandom or PN sequence, comprised of a sequence of binary values that repeats after a specified period of time.
The processing gain realized by using a direct-sequence system is a function of the bandwidth of the transmitted signal compared with the bit rate of the information. The computed gain is the improvement resulting from the RF-to-information bandwidth tradeoff. Using direct-sequence modulation, the processing gain is equal to the ratio of the RF spread-spectrum signal bandwidth to the information rate in the baseband channel, G_{P} = BW_{RF}/R, where R is typically expressed in bits/s for digital communications.
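As a worked example of the processing-gain formula, the one-line helper below expresses G_P in decibels; the function name is illustrative.

```python
import math

def processing_gain_db(spread_bandwidth_hz, info_rate_bps):
    """Direct-sequence processing gain G_P = BW_RF / R, expressed in dB."""
    return 10 * math.log10(spread_bandwidth_hz / info_rate_bps)
```

A 1 MHz spread bandwidth carrying a 1 kbit/s information stream yields 30 dB of processing gain.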
The objective of a beamforming system is to focus signals received from an image point onto a transducer array. By inserting proper delays in a beamformer to compensate for wavefronts propagating in a particular direction, signals arriving from the direction of interest are added coherently, while those from other directions do not add coherently, or cancel. For a multi-beam system, separate electronic circuitry is necessary for each beam.
Using conventional implementations, the resulting electronics rapidly become both bulky and costly as the number of beams increases. Traditionally, the cost, size, complexity and power requirements of a high-resolution beamformer have been avoided by "work-around" system approaches which form only a small number of beams from the many transducer elements typically used in the sonar array. A typical configuration uses a center beam together with four adjacent beams aimed left, right, above and below the center. The beams are each formed from fifty or more elements in an array, each phased appropriately for the coherent summation in the five directions of interest. The advantage of using so many elements is narrower beamwidths when compared with a smaller array; however, knowledge of the outside world is still based on a five-pixel image. For real-time 3D high-resolution sonar imaging applications, a preferred embodiment utilizes an electronically steerable two-dimensional beamforming processor based on a delay-and-sum computing algorithm.
A delay-and-sum beamformer allows a 2D array to "look" for signals propagating in particular directions. By adjusting the delays associated with each element of the array, the array's "look" direction, or field of view, can be electronically steered toward the source of radiation. By systematically varying the beamformer's delays and its shading, or apodization, along a 2D imaging plane, a 2D scan response of the array can be measured and resulting images representing the 2D radiation sources can be generated. To realize such a delay-and-sum beamformer, a programmable delay line is needed at each receiver. However, as the array is scanning through the imaging plane, there are two difficult implementation issues: first, each delay line has to be long enough to compensate for the path differences of a large-area array, and second, the delay value has to be adjusted at each clock cycle for proper beam steering (i.e., the time-of-flight from the radiation source to the focal point has to be calculated at every clock cycle). For example, a 10 m range requirement with a resolution of one to two centimeters dictates an array aperture in the range of 40 cm. To realize a thirty-degree scanning volume, a maximum delay of 70 μs is required. This implies that a 2,300-stage delay line and a 12-bit control word are needed at each receiver to achieve the time-of-flight delay requirements. The long delay and large number of digital I/Os would set an upper limit on how many processors can be integrated on one chip. For example, for 64-channel time-domain beamforming electronics, a straightforward implementation would require 64 2,300-stage delay lines and 768 digital I/O pads. Such a large-area chip and large number of I/O connections would make the implementation impractical.
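The 70 μs and 2,300-stage figures above can be reproduced with back-of-envelope arithmetic if one assumes the worst-case path difference is aperture × sin(half-angle), with a ±15° (thirty-degree total) scan, 1500 m/s sound speed, and a delay-line clock near 33 MHz. Those assumptions are ours, made for the sketch, not stated in the text.

```python
import math

def delay_line_requirements(aperture_m, half_angle_rad, sound_speed_mps, clock_hz):
    """Back-of-envelope sizing of the per-receiver programmable delay line."""
    # worst-case time-of-flight difference across the aperture
    max_delay_s = aperture_m * math.sin(half_angle_rad) / sound_speed_mps
    stages = math.ceil(max_delay_s * clock_hz)       # delay-line length
    control_bits = math.ceil(math.log2(stages))      # delay-select word width
    return max_delay_s, stages, control_bits
```

With a 40 cm aperture this gives roughly 69 μs, about 2,300 stages, and a 12-bit control word, consistent with the figures quoted above.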
An electronic beamforming structure is described to circumvent the impractically long delay line requirement and a delay-update computation based on the determination of time-of-flight surface coordinates is presented to reduce the digital I/O requirement. This electronic programmable beamforming structure functions as an electronically adjustable/controllable virtual acoustic lens. For this reason, this device is referred to herein as an electronically-controlled conformal lens.
An electronically-adjustable acoustic conformal lens uses a divided surface of a 2D transducer array in which plane “tiles” of relatively small subarrays are provided. As depicted in the embodiment of
A detailed diagram of an electronically-controlled beamforming system in accordance with the invention is shown in
Shown in
The down converter of
By systematically varying the beamformer's delays and its shading along a 2D imaging plane, a rectilinear 2D scan pattern 360 of the array can be measured and resulting 2D images representing the 2D radiation sources can be created, see
In real-time imaging applications, focus-and-steer images require knowledge of the time of flight from each source to each receiver in an array. Computing a new point on any time-of-flight surface requires finding the square root of a sum of squares, which is a computationally intensive task. A delay-update computation method can be used which reduces the determination of the rectangular coordinates of a new point on any time-of-flight surface to the computation time of a single addition. It is well known that the method of moments can be used to synthesize basis functions that represent an arbitrary multidimensional function. Although the complete basis requires the determination of infinitely many coefficients, a finite-degree basis function can be generated using a least-mean-square (LMS) approximation. The specific form of the finite-degree basis depends on functional separability and the limits of the region of support. Using the forward-difference representation of the truncated moments basis, a new functional value can be computed at every clock cycle. If the computation is performed within a square region of support, the direction of the finite difference corresponds to the direction in which the function is computed. For example, functional synthesis from the upper-right to lower-left corners within the region of support implies the computation of a multidimensional, backward-difference representation. Conversely, the multidimensional, forward-difference representation presented above allows functional synthesis to proceed from the lower-left to the upper-right corners within the region of support. This approach produces images at least an order of magnitude faster than conventional time-of-flight computation.
In practice, the complete moments basis representation of a surface can be degree-limited for synthesis. One truncation method is to approximate f(x,y) with a bivariate polynomial of degree M. The bi-Mth-degree approximation can be written as

$$\hat{f}(x,y) = \sum_{p=0}^{M} \sum_{q=0}^{M} \hat{a}_{p,q}\, x^{p} y^{q},$$

where the coefficients $\hat{a}_{p,q}$ can be derived based on the LMS criterion.
Since the bi-Mth-degree polynomial $\hat{f}(x,y)$ possesses only nonnegative-integer powers of x and y, it can be formulated as a stable, forward-difference equation. In general, $(M+1)^2$ forward-difference terms are sufficient to describe a polynomial whose highest degree in x and y is M. These terms completely specify $\hat{f}(x,y)$ within its region of support.
Based on the assumption that the surface is to be scanned in a raster fashion and has been scaled, the step size is 1. For this case, the first and second forward differences in one dimension are
$$\Delta_x^1 = \hat{f}(x_0+1, y_0) - \hat{f}(x_0, y_0),$$
$$\Delta_x^2 = \hat{f}(x_0+2, y_0) - 2\hat{f}(x_0+1, y_0) + \hat{f}(x_0, y_0).$$
Using these forward differences, a second-degree polynomial in one dimension can be written in difference form as
It follows that the two-dimensional forward differences can be obtained by evaluating the cross-product term in $\hat{f}(x,y)$.
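The addition-only update that the forward differences enable can be sketched for the one-dimensional, second-degree case. For a quadratic, the second difference is constant, so each new functional value costs two additions and no multiplications; the code below illustrates the technique, not the patent's CMOS structure.

```python
def forward_difference_scan(f, x0, n):
    """Evaluate a second-degree polynomial f at x0, x0+1, ..., x0+n-1
    using running forward differences (unit step), additions only."""
    y = f(x0)
    d1 = f(x0 + 1) - f(x0)                   # first forward difference
    d2 = f(x0 + 2) - 2 * f(x0 + 1) + f(x0)   # second difference (constant)
    out = [y]
    for _ in range(n - 1):
        y += d1        # next functional value: one addition
        d1 += d2       # update the running first difference
        out.append(y)
    return out
```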
A CMOS computing structure can be used to perform functional synthesis using the forward-difference representation of a multidimensional, finite-degree polynomial. This implementation allows the synthesis of arbitrary functions using repeated additions with no multiplications. An example of this computing structure 370 is presented in
Using this approach, instead of the alternative 11 bits/channel, the digital connectivity can be reduced to 1 bit/channel, followed by on-chip computation circuitry that generates the equivalent 12-bit value while maintaining the 30 billion bits/s parallel update rate.
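As a software analogue of the additions-only synthesis performed by the computing structure (a sketch, not the CMOS implementation; the polynomial coefficients and grid length below are illustrative), a second-degree polynomial can be evaluated at unit steps using only additions once its forward differences are initialized:

```python
def synthesize(a0, a1, a2, n):
    """Return [f(0), f(1), ..., f(n-1)] for f(x) = a0 + a1*x + a2*x^2
    using repeated additions only (no multiplications in the loop)."""
    f = a0           # f(0)
    d1 = a1 + a2     # first forward difference at x = 0: f(1) - f(0)
    d2 = 2 * a2      # second forward difference (constant for degree 2)
    out = [f]
    for _ in range(n - 1):
        f += d1      # advance the function value by one step
        d1 += d2     # advance the first difference
        out.append(f)
    return out

# Direct evaluation (with multiplications) agrees with the difference form.
assert synthesize(1, 3, 2, 8) == [1 + 3 * x + 2 * x * x for x in range(8)]
```

The two-dimensional case works the same way, with an additional difference term for the cross (xy) contribution updated at each raster step.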
Preferred elements of a high-performance ultrasound imaging system include the ability to provide features such as 1) multi-zone transmit focus, 2) different pulse shapes and frequencies, 3) support for a variety of scanning modes (e.g., linear, trapezoidal, curved-linear, or sector), and 4) multiple display modes such as M-mode, B-mode, Doppler sonogram, and color-flow mapping (CFM). Preferred embodiments of such a system are based on the integrated beamforming chip described herein. All five systems can provide the desired capabilities described above, with different emphasis on physical size and power consumption.
In the system 400 shown in
Charge-domain processors 470 (CDP) for beamforming can also be fully integrated into a dedicated system, as shown in
A preferred embodiment for a compact scanhead that minimizes noise and cable loss is shown in
The semi-integrated front-end probe 482 described in
The multi-dimensional beamformer processing system is a time-domain processor that simultaneously processes the transmit pulses and/or returns of a two-dimensional array 502. For transmit beamforming, the system can be used either in a bi-static mode, utilizing a separate transmit transducer array 502, or it can use the receive array 504 for transmit focus as well. As shown in
The multi-channel transmit/receive chip performs the functions of transmit beamforming, switching between transmit and receive modes (T/R switch), and high-voltage level shifting. As shown in
While the period of the transmit-chip clock typically determines the delay resolution, a technique called programmable subclock delay resolution allows the delay resolution to be finer than the clock period. With programmable subclock delay resolution, the output of the frequency counter is gated with a phase of the clock that is programmable on a per-channel basis. In the simplest form, a two-phase clock is used and the output of the frequency counter is gated with either the asserted or the deasserted phase of the clock. Alternatively, multiple skewed clocks can be used; one clock per channel is selected and used to gate the coarse timing signal from the frequency counter. In another implementation 560 shown in
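A minimal timing model of programmable subclock delay resolution (an illustrative sketch, not the patent's circuit) is a coarse count of whole clock periods plus a selected phase of a multi-phase clock:

```python
def channel_delay(coarse_count, phase_select, clock_period, num_phases=2):
    """Effective trigger time for one transmit channel.

    coarse_count : whole clock periods counted by the frequency counter
    phase_select : index of the skewed clock phase gating the coarse signal
    num_phases   : number of available clock phases (2 in the simplest form)
    """
    subclock_step = clock_period / num_phases
    return coarse_count * clock_period + phase_select * subclock_step

# With a two-phase clock the delay resolution is half a clock period.
T = 100e-9  # example 10 MHz clock period, in seconds
assert channel_delay(5, 0, T) == 5 * T
assert channel_delay(5, 1, T) == 5 * T + T / 2
```

With more skewed phases the achievable resolution is clock_period/num_phases, which is the motivation for the multi-clock variant described above.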
By systematically varying beamformer delays and shading along a 2D imaging plane, a 2D scan response of a 2D transducer array can be measured and resulting 2D images representing the 2D radiation sources can be created. This method can be extended to scan not just a 2D plane but a 3D volume by systematically changing the image plane depth as time progresses, producing a sequence of 2D images, each generated by the 2D beamforming processors as described above. The sequence of images depicts a series of cross-section views of a 3D volume as shown in
The same sequential vs parallel receive beamforming architecture is applicable to a 1D linear or curved linear array.
A Doppler sonogram 620 can be generated using single-range-gate Doppler processing, as shown in
where c is the speed of sound in the transmitting medium and ƒ_c is the center frequency of the transducer. As an example, if N=16 and ƒ_prf=1 kHz, the above equation can be used to generate a sonogram displaying 16 ms of Doppler data. If the procedure is repeated every N/ƒ_prf seconds, a continuous Doppler sonogram plot can be produced.
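The timing arithmetic can be checked numerically. Since the equation image is not reproduced here, the sketch below uses the standard Doppler-shift relation v = c·ƒ_d/(2·ƒ_c); the transducer frequency and Doppler shift values are illustrative:

```python
def sonogram_span(n_pulses, f_prf_hz):
    """Duration (s) of Doppler data covered by one N-pulse segment."""
    return n_pulses / f_prf_hz

def doppler_velocity(f_doppler_hz, f_center_hz, c=1540.0):
    """Axial velocity (m/s) from a Doppler shift, via v = c*f_d/(2*f_c).

    c defaults to the nominal speed of sound in soft tissue.
    """
    return c * f_doppler_hz / (2.0 * f_center_hz)

# 16 pulses at f_prf = 1 kHz span 16 ms of Doppler data per segment.
assert sonogram_span(16, 1000.0) == 0.016
# A 2 kHz shift at a 3.5 MHz center frequency corresponds to ~0.44 m/s.
assert abs(doppler_velocity(2000.0, 3.5e6) - 0.44) < 1e-9
```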
The following relates to a pulse-Doppler processor for 3D color flow map applications. The pulsed systems described here can be used for interrogating flow patterns and velocities, such as the flow of blood within a vessel. The time evolution of the velocity distribution is presented as a sonogram, and different parts of the vessel can be probed by moving the range gate and varying its size. The ultimate goal in probing the circulatory system with ultrasound is to display a full map of the blood flow in real time, enabling the display of velocity profiles in vessels and of the pulsatility of the flow. One step toward meeting this goal is the use of color flow mapping (CFM) systems. These are an extension of the multi-gated system described in the above paragraph, in that the blood velocity is estimated for a number of directions (scan lines) in order to form an image of the flow pattern. The velocity image is superimposed on a B-mode image; the velocity magnitude is coded as color intensity and the direction of flow is coded as color. For example, red indicates flow toward the transducer and blue indicates flow away from it. A color-flow map based on pulsed Doppler processing is shown here in
Algorithms can be used to compute the first moment and the velocity distribution of the pulse returns. Instead of a Fourier transform-based computation, a cross-correlation technique, described in Jensen, Jorgen A., "Estimation of Blood Velocities Using Ultrasound", Cambridge Univ. Press, 1996, the entire contents of which are incorporated herein by reference, can also be used to produce a similar color flow map. Furthermore, an optimal mean velocity estimation can be used.
Mean velocity (i.e., first spectral moment) estimation is central to many pulse-Doppler data processing applications. In applications such as color flow mapping for displaying mean velocity, the inherent requirements for a high scan rate and fine (azimuth) scan patterns restrict the allocation of pulse samples to but a small number per range cell. As a result, these applications at times operate near the fundamental limits of their estimation capabilities. For such specific needs, an optimal Doppler centroid estimation for the case of known spectral width (SW) and signal-to-noise ratio (SNR) is described.
Let us consider the usual probabilistic model for pulse-Doppler observation of a complex-valued vector return z_1, z_2, . . . , z_N corresponding to a single range cell, with N equally spaced samples of a complex Gaussian process with covariance matrix T=E[ZZ*]. We also adopt the common single-source sample-covariance model consisting of a Gaussian-shaped signal plus uncorrelated additive noise:

r_n = S e^{−8(πσ_v nτ/λ)^2} e^{−j4π v̄ nτ/λ} + V_noise δ_n  (0 ≤ n < N)

where the model parameters v̄ and σ_v represent the mean Doppler velocity and the Doppler SW, τ is the pulse repetition interval, λ is the transducer RF wavelength, and S and V_noise respectively represent the signal and noise power magnitudes. Let us define
In the case of maximum likelihood (ML) estimation, it results in a simple mean velocity expression
where r_{n} ^{r }is the weighted autocorrelation estimate defined by
where
and γ_{i,k }and the element of the matrix Γ,
The generic waveform for pulse-Doppler ultrasound imaging is shown in
The CDP device described here performs all of the functions indicated in the dotted box 662 of
In order to describe the application of the PDP to the Doppler filtering problem, we first cast the Doppler filtering equation into a sum of real-valued matrix operations. The Doppler filtering is accomplished by computing a Discrete Fourier Transform (DFT) of the weighted pulse returns for each depth of interest. If we denote the depth-Doppler samples g(k,j), where
The weighting function can be combined with the DFT kernel to obtain a matrix of Doppler filter transform coefficients with elements given by
w(k,n) = w_{k,n} = v(n) exp(−j2πkn/N)
The real and imaginary components of the Doppler filtered signal can now be written as
In Eq. (4), the double-indexed variables may all be viewed as matrices. Therefore, in matrix representation, the Doppler filtering can be expressed as a matrix product operation. It can be seen that the PDP device can be used to perform each of the four matrix multiplications, thereby implementing the Doppler filtering operation.
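The real-valued decomposition can be sketched numerically: the complex product G = W·X splits into Gr = Wr·Xr − Wi·Xi and Gi = Wr·Xi + Wi·Xr, i.e., the four real matrix multiplications a real-arithmetic matrix processor would perform. The sizes, the test data, and the rectangular window v(n) = 1 below are illustrative:

```python
import cmath

def matmul(A, B):
    """Plain matrix product over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

N = 4  # pulses per depth (Doppler DFT length)
J = 3  # number of depth samples
W = [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)]
     for k in range(N)]                      # v(n) = 1 window assumed
X = [[complex(n + j, j - n) for j in range(J)] for n in range(N)]

Wr = [[w.real for w in row] for row in W]
Wi = [[w.imag for w in row] for row in W]
Xr = [[x.real for x in row] for row in X]
Xi = [[x.imag for x in row] for row in X]

# The four real matrix products.
Gr = [[a - b for a, b in zip(r1, r2)]
      for r1, r2 in zip(matmul(Wr, Xr), matmul(Wi, Xi))]
Gi = [[a + b for a, b in zip(r1, r2)]
      for r1, r2 in zip(matmul(Wr, Xi), matmul(Wi, Xr))]

# They reproduce the direct complex Doppler filtering.
G = matmul(W, X)
assert all(abs(complex(Gr[k][j], Gi[k][j]) - G[k][j]) < 1e-9
           for k in range(N) for j in range(J))
```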
A block diagram of the PDP device is shown in
A two-PDP implementation for color flow mapping in an ultrasound imaging system is shown in
A software flow chart 740 for color-flow map computation based on the optimal mean velocity estimation described above is shown in
A software flow chart for color-flow map computation based on the cross-correlation computation 760 is shown in
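Conceptually (after the cross-correlation approach credited to Jensen above; this is an illustrative sketch, not the flow chart's implementation), echoes from successive pulses are time-shifted copies of one another, the shift is found by maximizing the cross-correlation, and velocity follows from v = c·t_shift/(2·T_prf). All signal parameters below are illustrative:

```python
import math

def best_lag(a, b, max_lag):
    """Integer lag maximizing the cross-correlation of a with shifted b."""
    def corr(lag):
        lo, hi = max(0, -lag), min(len(a), len(b) - lag)
        return sum(a[i] * b[i + lag] for i in range(lo, hi))
    return max(range(-max_lag, max_lag + 1), key=corr)

fs = 40e6          # RF sampling rate (Hz)
t_prf = 1.0 / 5e3  # pulse repetition interval (s)
c = 1540.0         # speed of sound (m/s)

pulse = [math.exp(-((i - 5) / 2.0) ** 2) for i in range(11)]
echo1 = [0.0] * 40 + pulse + [0.0] * 40
echo2 = [0.0] * 43 + pulse + [0.0] * 37   # same echo, 3 samples later

lag = best_lag(echo1, echo2, max_lag=10)
velocity = c * (lag / fs) / (2.0 * t_prf)  # m/s, positive = away
assert lag == 3
```

In practice the correlation is interpolated for sub-sample shift resolution and averaged over many pulse pairs per range gate.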
While we have shown and described several embodiments in accordance with the present invention, it is to be understood that the invention is not limited thereto, but is susceptible to numerous changes and modifications as known to a person skilled in the art, and we therefore do not wish to be limited to the details shown and described herein but intend to cover all such changes and modifications as are obvious to one of ordinary skill in the art.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
U.S. Classification | 367/138 |
International Classification | G01S7/523, H01Q25/04, G10K11/34, H01Q25/00, H01Q3/26 |
Cooperative Classification | G01S7/52095, H01Q3/26, H01Q3/2682, G01S15/8927, G01S7/52085, G10K11/346, H01Q25/00 |
European Classification | G01S7/52S14F, G01S15/89D1C5, G01S7/52S14, H01Q25/00, G10K11/34C4, H01Q3/26, H01Q3/26T |