US 20020044014 A1 Abstract A wideband predistortion system compensates for a nonlinear amplifier's frequency and time dependent AM-AM and AM-PM distortion characteristics. The system comprises a data structure in which each element stores a set of compensation parameters (preferably including FIR filter coefficients) for predistorting the wideband input transmission signal. The parameter sets are preferably indexed within the data structure according to multiple signal characteristics, such as instantaneous amplitude and integrated signal envelope, each of which corresponds to a respective dimension of the data structure. To predistort the input transmission signal, an addressing circuit digitally generates a set of data structure indices from the input transmission signal, and the indexed set of compensation parameters is loaded into a compensation circuit which digitally predistorts the input transmission signal. This process of loading new compensation parameters into the compensation circuit is preferably repeated every sample instant, so that the predistortion function varies from sample to sample. The sets of compensation parameters are generated periodically and written to the data structure by an adaptive processing component that performs a non-real-time analysis of amplifier input and output signals. The adaptive processing component also implements various system identification processes for measuring the characteristics of the power amplifier and generating initial sets of filter coefficients. In an antenna array embodiment, a single adaptive processing component generates the compensation parameter sets for each of multiple amplification chains on a time-shared basis. In an embodiment in which the amplification chain includes multiple nonlinear amplifiers that can be individually controlled (e.g., turned ON and OFF) to conserve power, the data structure separately stores compensation parameter sets for each operating state of the amplification chain.
Claims(77) 1. A method of generating an initial set of compensation parameters for use within a compensation circuit that predistorts an input transmission signal to a wideband amplifier to compensate for nonlinearities in an amplification process, the method comprising:
applying stimulation signals to the amplifier while recording observation data that represents a resulting output of the amplifier; evaluating the observation data to measure selected characteristics of the amplifier; constructing a non-linear model of the amplifier which incorporates the selected characteristics; adaptively adjusting the amplifier model to improve an accuracy of the model; and using the adjusted amplifier model to generate the initial set of compensation parameters. 2. The method as in 3. The method as in 4. The method as in 5. The method as in 6. The method as in 7. The method as in 8. The method as in 9. The method as in 10. The method as in 11. The method as in forming a vector of frequency domain gain and phase responses associated with a selected one of the plurality of amplitude levels; and computing an inverse Fourier Transform of the vector to generate a set of FIR filter coefficients. 12. The method as in 13. The method as in 14. The method as in 15. The method as in 16. The method as in 17. The method as in 18. The method as in 19. The method as in (a) stimulating upconversion and amplification circuitry of the amplifier with a stimulating waveform while collecting observation data; and (b) increasing a power level of the stimulating waveform incrementally and repeating (a) until a 1 dB compression point and saturated power operating point of the amplifier have been reached. 20. The method as in 21. The method as in 22. The method as in (a) applying an input signal to the model and to the amplifier while monitoring a difference between respective outputs thereof, and adaptively adjusting parameters of the model until an error floor in the difference is substantially reached; and (b) once the error floor has been substantially reached, increasing a complexity level of the model and then repeating (a). 23. The method as in 24. The method as in 25. The method as in 26. The method as in 27. 
The method as in coupling the adjusted amplifier model to an output of a compensation module that corresponds to the compensation circuit; applying a signal to the compensation module, and monitoring a resulting difference between said signal and an output of the adjusted amplifier model; and adaptively adjusting compensation parameters of the compensation module to reduce said difference. 28. The method as in 29. The method as in 30. A method for modeling a wideband amplifier, comprising:
(a) applying stimulation signals to the amplifier to measure characteristics of the amplifier; (b) using the characteristics measured in (a) to generate a non-linear model of the amplifier; (c) applying an input signal to the model and to the amplifier while monitoring a difference between respective outputs thereof, and adaptively adjusting parameters of the model until an error floor in the difference is substantially reached; and (d) increasing a level of complexity of the model and then repeating (c). 31. The method as in 32. The method as in 33. The method as in 34. The method as in 35. The method as in 36. The method as in 37. The method as in 38. The method as in 39. The method as in 40. The method as in 41. The method as in 42. The method as in coupling the amplifier model to an output of a pre-amplification compensation module; applying a signal to the pre-amplification compensation module, and monitoring a resulting difference between said signal and an output of the amplifier model; and adaptively adjusting compensation parameters of the pre-amplification compensation module to reduce said difference, to thereby generate estimates of compensation parameters to be used during transmissions. 43. The method as in 44. The method as in 45. The method as in 46. The method as in reducing the model of the amplifier to a first order, single kernel model in which sets of filter coefficients are stored in a one-dimensional data structure; and computing an initial set of the compensation parameters directly from the first order, single kernel model. 47. A method of generating a model of a wideband amplifier, comprising:
applying narrowband stimulation signals to the amplifier over a plurality of amplitude levels and a plurality of center frequencies, and using resulting amplifier output data to compute amplitude-dependent and frequency-dependent variations in at least the gain and phase rotation introduced by the amplifier; applying a wideband stimulation signal to the amplifier, and using resulting output data to compute bulk estimates of at least the gain, phase rotation and delay introduced by the amplifier; generating a data structure which contains multiple sets of finite impulse response (FIR) filter coefficients indexed by signal amplitude level, wherein the FIR filter coefficients incorporate the amplitude-dependent and frequency-dependent variations in the gain and phase rotation; and cascading a bulk stage that incorporates the bulk estimates of the gain, phase rotation and delay with a filter stage that filters an input signal using the FIR filter coefficients stored in the data structure, wherein the filter stage selects sets of FIR filter coefficients from the data structure for use based at least upon a current amplitude of the input signal. 48. The method as in 49. The method as in 50. The method as in 51. The method as in 52. The method as in 53. The method as in forming a vector of frequency domain gain and phase responses associated with a selected amplitude level; and computing an inverse Fourier Transform of the vector to generate a set of FIR filter coefficients. 54. The method as in 55. The method as in 56. The method as in 57. The method as in 58. The method as in (a) applying an input signal to the model and to the amplifier while monitoring a difference between respective outputs thereof, and adaptively adjusting parameters of the model until an error floor in the difference is substantially reached; and (b) increasing a complexity level of the model and then repeating (a). 59. The method as in 60. The method as in 61. 
A method of modeling a frequency response of a wideband amplifier, comprising:
(a) stimulating the amplifier with a narrowband signal over substantially an entire input amplitude range of the amplifier while recording observation data that represents a resulting output of the amplifier; (b) repeating (a) for each of a plurality of center frequencies of the narrowband signal such that the amplifier is stimulated over substantially an entire operating bandwidth; and (c) for each of a plurality of discrete amplitude levels, using the observation data recorded in (a) and (b) to compute gain and phase responses of the amplifier for at least some of the plurality of center frequencies. 62. The method as in 63. The method as in 64. The method as in 65. The method as in 66. The method as in 67. A model of a non-linear wideband amplifier, the model comprising:
a bulk stage that applies at least bulk gain, phase and delay adjustments to an input signal; and a filter stage that further adjusts the input signal to account for at least frequency-dependent and amplitude-dependent variations in the gain and phase introduced by the amplifier, the filter stage comprising a data structure that supplies finite impulse response (FIR) filter coefficients to an FIR filter based at least upon a current amplitude or power of the input signal. 68. The model as in 69. The model as in 70. The model as in 71. The model as in 72. The model as in 73. The model as in 74. The model as in 75. A method of generating an initial set of compensation parameters, including filter coefficients, for use within a digital compensation circuit that predistorts an input signal to a wideband amplifier, the method comprising:
generating an initial model of the wideband amplifier, wherein the initial model comprises a filter structure for which sets of coefficients are supplied by a multi-dimensional data structure, wherein each dimension of the data structure corresponds to a different respective input signal characteristic and each storage element of the data structure stores a set of filter coefficients; reducing the initial model of the amplifier to a first order, single kernel model in which sets of filter coefficients are stored in a one-dimensional data structure; and computing an initial set of the compensation parameters directly from the first order, single kernel model. 76. The method as in (a) applying stimulation signals to the amplifier to measure characteristics of the amplifier; (b) using the characteristics measured in (a) to generate a non-linear model of the amplifier; (c) applying an input signal to the non-linear model and to the amplifier while monitoring a difference between respective outputs thereof, and adaptively adjusting parameters of the model until an error floor in the difference is substantially reached; and (d) increasing a level of complexity of the non-linear model and then repeating (c) until a desired level of accuracy is reached. 77. The method as in Description [0001] This application claims the benefit of U.S. Provisional Appl. No. 60/143,570, filed Jul. 13, 1999, the disclosure of which is hereby incorporated by reference. [0002] This invention relates to power amplifiers, and more particularly relates to predistortion circuits and methods for compensating for nonlinearities within the amplification process. [0003] Radio frequency (RF) power amplifiers are widely used to transmit signals in communications systems. Typically a signal to be transmitted is concentrated around a particular carrier frequency occupying a defined channel. 
Information is sent in the form of modulation of amplitude, phase and/or frequency, causing the information to be represented by energy spread over a band of frequencies around the carrier frequency. In many schemes the carrier itself is not sent since it is not essential to the communication of the information. [0004] A signal which varies in amplitude will suffer distortion during amplification if the amplifier does not exhibit a linear amplitude characteristic. Perfect linearity over a wide range of amplitude is difficult to realize in practice. The signal will also suffer distortion if the phase shift introduced by the amplifier (1) varies with the signal's amplitude, or (2) is not linear over the range of frequencies present in the signal. The distortion introduced typically includes intermodulation of the components of the input signal. In addition to appearing within the bandwidth of the signal, such distortion products typically extend outside the bandwidth originally occupied by the signal, potentially causing interference in adjacent channels. Although filtering can be used to remove the unwanted out of band distortion, filtering is not always practical, especially if the amplifier is required to operate on several different frequencies. [0005] A typical amplifier becomes significantly nonlinear at a small fraction of its maximum output capacity. In order to maintain linearity, the amplifier is therefore operated at an input and output amplitude which is low enough that the signals to be amplified are in a part of the transfer characteristic which is substantially linear. In this mode of operation, known as “backed off”, the amplifier has a low supplied power to transmitted power conversion efficiency. For example, a “Class A” amplifier operating in this mode may have an efficiency of only 1%. In addition to wasting power, amplifiers operated in a backed off mode tend to be large and expensive. 
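The intermodulation behavior described in paragraph [0004] can be demonstrated numerically. The sketch below, which is illustrative only (the tone frequencies and the cubic coefficient are arbitrary choices, not values from this application), passes a two-tone signal through a weakly compressive cubic characteristic and shows third-order products appearing at 2*f1 - f2 and 2*f2 - f1, just outside the band occupied by the original tones.

```python
import numpy as np

# Illustrative sketch of the intermodulation described in [0004]: a two-tone
# signal through a weakly compressive cubic characteristic y = x - a3*x^3.
# The tone frequencies and a3 are arbitrary, not values from this document.
fs, n = 1000, 1000                  # 1 Hz bin spacing, so tones land on bins
t = np.arange(n) / fs
f1, f2 = 100.0, 110.0               # two closely spaced tones
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = x - 0.1 * x ** 3                # memoryless nonlinearity

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))

# Third-order intermodulation products appear at 2*f1 - f2 = 90 Hz and
# 2*f2 - f1 = 120 Hz, outside the band originally occupied by the tones.
im3_low, im3_high = spectrum[90], spectrum[120]
```

These out-of-band products are precisely the adjacent-channel interference that paragraph [0004] notes cannot always be removed by filtering.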
[0006] One method for compensating for an amplifier's nonlinearities is known as predistortion. With traditional predistortion, an inverse model of the amplifier's nonlinear transfer characteristic is formed and is then applied to the low level signal at the input of the amplifier. The input signal is thus predistorted in a manner that is equal to and opposite from the distortion introduced during amplification, so that the amplified signal appears undistorted. To account for variations in the amplifier's transfer characteristic, the inverse model is updated based on a real-time observation of the amplifier's input and output signals. [0007] One problem with existing predistortion methods is that they are generally based on the assumption, known as the memoryless AM-AM and AM-PM assumption, that (a) the nonlinear response of the amplifier is independent of the instantaneous frequency of the stimulating waveform, and (b) the nonlinear response of the amplifier is independent of previous amplifier input stimulus. Unfortunately, (a) and (b) generally do not hold true for wideband applications. As a result, existing predistortion techniques do not produce satisfactory results within wideband systems. [0008] Another problem with existing predistortion techniques is that they fail to accurately take into account memory effects (effects of past stimulus) within the AM-AM and AM-PM distortion characteristic. Such memory effects are often caused by fluctuations in amplifier transistor die temperatures which occur as the result of variations in the amplitude of the signal being amplified. Failure to accurately predict and account for such memory effects can produce poor results. [0009] The present invention addresses the above and other problems with existing predistortion schemes. 
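The inverse-model idea of paragraph [0006] can be sketched for the simplest, memoryless AM-AM/AM-PM case: model the amplifier as a complex gain that depends only on instantaneous amplitude, numerically invert the amplitude characteristic, and apply the inverse ahead of the amplifier. The gain model below is a toy placeholder, not a measured amplifier characteristic.

```python
import numpy as np

# Sketch of classical memoryless predistortion ([0006]): the amplifier is a
# complex gain depending only on instantaneous amplitude (AM-AM and AM-PM),
# and the predistorter applies the numerically inverted characteristic.
# The gain model is a toy placeholder, not a measured amplifier.
def pa_gain(r):
    am_am = 1.0 / (1.0 + 0.5 * r ** 2)        # gain compression
    am_pm = 0.2 * r ** 2                      # amplitude-dependent phase shift
    return am_am * np.exp(1j * am_pm)

def amplifier(x):
    return x * pa_gain(abs(x))

# Tabulate output amplitude vs input amplitude, then invert by interpolation
# (the toy AM-AM curve is monotonic on this input range).
r_in = np.linspace(0.0, 1.0, 1001)
r_out = r_in * np.abs(pa_gain(r_in))

def predistort(x):
    r = abs(x)
    if r == 0:
        return x
    r_pd = float(np.interp(r, r_out, r_in))   # inverse AM-AM lookup
    # Scale to the pre-compression amplitude and pre-rotate to cancel AM-PM.
    return (x / r) * r_pd * np.exp(-1j * 0.2 * r_pd ** 2)

x = 0.5 * np.exp(1j * 0.3)                    # one complex baseband sample
y = amplifier(predistort(x))                  # cascade is (nearly) linear
```

As the surrounding text explains, this memoryless form is exactly what breaks down for wideband signals, motivating the frequency- and history-dependent compensation that follows.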
[0010] The present invention provides a wideband predistortion system and associated methods for compensating for non-linear characteristics of a power amplifier, including the amplifier's frequency and time dependent AM-AM and AM-PM distortion characteristics. The system preferably comprises a data structure in which each element stores a set of compensation parameters (preferably including FIR filter coefficients) for predistorting the wideband input signal. The parameter sets are preferably indexed within the data structure according to multiple signal characteristics, such as instantaneous amplitude and integrated signal envelope, each of which corresponds to a respective dimension of the data structure. [0011] To predistort the input transmission signal, an addressing circuit digitally generates a set of data structure indices by measuring the input transmission signal characteristics by which the data structure is indexed. In one embodiment, a data structure index is also generated from the output of a transistor die temperature sensor. On each sample instant, the indexed set of compensation parameters is loaded into a compensation circuit that predistorts the input transmission signal. The compensation circuit, which may be implemented in application-specific circuitry, preferably includes a finite impulse response (FIR) filter, and may also include an IQ modulator correction circuit. [0012] The sets of compensation parameters are generated and written to the data structure by an adaptive processing component, which may be implemented using a programmed microprocessor or digital signal processor. The adaptive processing component generates the compensation parameter sets during regular amplifier operation by performing a non-real-time analysis of captured amplifier input and output signals. The adaptive processing component also preferably implements a state machine for controlling the overall operation of the amplifier system. 
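The addressing scheme of paragraphs [0010] and [0011] can be sketched as follows: each sample instant, one index is derived from the instantaneous magnitude and a second from an integration of the signal envelope (a proxy for memory effects such as die-temperature drift), and the pair selects one parameter set from the multi-dimensional data structure. The table sizes, the integrator constant, and the placeholder parameter sets below are illustrative assumptions, not values from this application.

```python
import numpy as np

# Sketch of the multi-dimensional data structure addressing in [0010]-[0011].
# Table dimensions, the forgetting factor, and the stored "parameters" are
# illustrative placeholders, not values from this document.
N_MAG, N_ENV = 64, 16            # instantaneous-magnitude and envelope axes
LEAK = 0.999                     # integration filter forgetting factor

# Each element would hold a full compensation parameter set (e.g. FIR filter
# coefficients); here it is just a placeholder vector of five taps.
table = np.zeros((N_MAG, N_ENV, 5), dtype=complex)

def indices(samples, full_scale=1.0):
    """Per-sample (row, column) indices into the data structure."""
    env = 0.0
    out = []
    for x in samples:
        mag = abs(x)
        env = LEAK * env + (1 - LEAK) * mag          # integrated envelope
        i = min(int(mag / full_scale * N_MAG), N_MAG - 1)
        j = min(int(env / full_scale * N_ENV), N_ENV - 1)
        out.append((i, j))                           # selects table[i, j]
    return out

idx = indices(0.5 * np.ones(100))
```

Loading `table[i, j]` into the compensation filter on every sample instant gives the sample-to-sample variation of the predistortion function described above.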
[0013] The adaptive processing component also implements a system identification process for measuring the characteristics of the power amplifier and generating initial sets of compensation parameters. As part of this process, stimulation signals are applied to the amplifier to measure various characteristics of the amplifier, including amplitude-dependent and frequency-dependent characteristics. The measured characteristics are used to generate a non-linear model of the amplifier. An input signal is then applied to both the amplifier and its model while monitoring a difference between the respective outputs, and the parameters of the model are adaptively adjusted until an error floor is reached. The level of complexity of the model is then increased, and the adaptive process repeated, until a desired level of model accuracy is reached. The model is then used to generate initial sets of compensation parameters—preferably using a direct inversion and/or adaptive process. [0014] In one specific embodiment of, and application for, the invention, the predistortion architecture is used to compensate for nonlinearities in each amplification chain of an antenna array system. A compensation circuit of the type described above is provided along each amplification chain. However, rather than providing separate adaptive processing components for each amplification chain, a single adaptive processing component is used on a time-shared basis to generate and update the compensation parameters for all of the amplification chains. [0015] In another specific embodiment of, and application for, the invention, the amplification chain includes a power splitter that feeds multiple nonlinear amplifiers. The nonlinear amplifiers are individually controlled (e.g., turned ON and OFF) to conserve power, such as during low traffic conditions. The amplification chain thus has multiple operating points, each of which corresponds to a particular combination of amplifier states. 
In this embodiment, the data structure is expanded, such as by adding an additional dimension, to store sets of compensation parameters for each operating point of the amplification chain. [0016] Additional inventive features are set forth below. [0017] Several preferred embodiments of the invention will now be described with reference to the drawings, in which: [0018]FIG. 1 illustrates an amplifier system which implements digital predistortion in accordance with a preferred embodiment of the invention; [0019]FIG. 2 illustrates the flow of information within the amplifier system of FIG. 1; [0020]FIG. 3 illustrates details of the Digital Compensation Signal Processor (DCSP) of FIG. 1 according to one embodiment of the invention; [0021]FIGS. 4A and 4B illustrate example digital circuits that may be used to implement the predistortion filter and IQ modulator correction circuit (FIG. 3) of the DCSP; [0022]FIG. 5 illustrates an example digital circuit that may be used to implement the integration filter (FIG. 3) of the DCSP; [0023]FIG. 6, which consists of FIGS. 6A and 6B, illustrates another circuit that may be used to implement the integration filter; [0024]FIGS. 7 and 8 illustrate respective alternative designs for the DCSP of FIG. 1; [0025]FIG. 9 illustrates an example state machine that may be implemented by the Adaptive Control Processing and Compensation Estimator (ACPCE) in FIG. 1 to control the operation of the amplifier system; [0026]FIG. 10 illustrates a state diagram for the system identification (SID) phase of the ACPCE's operation; [0027]FIG. 11 illustrates power ramping of a measurement signal used for system identification. [0028]FIG. 12 illustrates gain and power response curves for typical RF amplifiers; [0029]FIG. 13 illustrates a process for implementing state 1 (measurement of circuit characteristics) in FIG. 10; [0030]FIG. 14 summarizes the initial identification problem solved by the ACPCE's system identification algorithms; [0031] FIGS. 
[0032]FIG. 19 illustrates a typical amplifier's nonlinear frequency domain AM-AM surface; [0033]FIG. 20 illustrates a process for computing the FIR filter coefficients used by the power amplifier model; [0034]FIG. 21 illustrates a predicted waveform error magnitude trajectory during an iterative model adaptation process; [0035]FIG. 22 is a flow diagram of the adaptation process depicted in FIG. 21; [0036]FIG. 23, which consists of FIGS. [0037]FIG. 24A illustrates a process for initially computing DCSP compensation parameters, and corresponds to state 3 in FIG. 10; [0038]FIG. 24B illustrates a desired system response sought through adjustment of the DCSP parameters; [0039]FIG. 25A illustrates a model inversion process; [0040]FIG. 25B illustrates a cascade of an inverse forward model and a simplified amplifier forward model; [0041]FIG. 26, which consists of FIGS. [0042]FIG. 27 illustrates the propagation of the computed DCSP parameters into the multi-dimensional data structure of the DCSP; [0043]FIG. 28, which consists of FIGS. [0044]FIG. 29 illustrates the general process by which the ACPCE updates the DCSP's compensation parameters during transmission events; [0045]FIG. 31A illustrates how the amplifier system of FIG. 1, and particularly the predistortion units, may be implemented within hardware; [0046]FIG. 31B illustrates a hardware implementation that may be used if a digital baseband data source is not available; [0047]FIG. 32 illustrates an embodiment in which the individual nonlinear amplifiers are separately controlled; [0048]FIG. 33, which consists of FIGS. 33A and 33B, illustrates an embodiment in which input signals are predistorted along each amplification path of an antenna array system using a single ACPCE; [0049]FIG. 34 illustrates an architecture for controlling multiple independent amplifiers for hot swap redundant applications; [0050]FIG. 
35 illustrates an embodiment which uses digital pre-conditioning and compression of the input signal; [0051]FIG. 36, which consists of FIG. 36( [0052]FIG. 37 illustrates a hardware implementation of the digital pre-conditioning and compression block in FIG. 35; [0053]FIG. 38 illustrates an alternative implementation of the digital pre-conditioning and compression block in FIG. 35; [0054]FIG. 39 illustrates a composite implementation of the digital pre-conditioning/compression and DCSP blocks of FIG. 35; [0055]FIG. 40 illustrates a process flow with no signal pre-conditioning; [0056]FIGS. 41 and 42 illustrate process flows with signal preconditioning; [0057]FIGS. 43 and 44 illustrate respective DCSP circuits for updating the multi-dimensional data structure; [0058]FIG. 45 illustrates a DCSP augmented with event-driven data capture circuitry; [0059]FIG. 46 illustrates an embodiment in which the amplifier's transistor die temperature is measured and provided to the ACPCE; [0060]FIG. 47 illustrates an embodiment in which compensation parameters are stored and provided to the DCSP for each carrier frequency within a hopping sequence; [0061]FIGS. 48 and 49 illustrate embodiments in which the DCSP's filtering function is performed by a quasi static filter cascaded with a dynamic filter; [0062]FIG. 50 illustrates an embodiment in which compensation parameters are generated on-the-fly, rather than being retrieved from a data structure; and [0063]FIG. 51 illustrates an embodiment which uses fast automatic gain control. [0064] Throughout the drawings, like reference numbers are used to indicate components that are similar or identical in function. [0065] A wideband amplifier system which implements a predistortion scheme according to the invention will now be described with reference to the drawings. Several variations, implementations, and enhancements of the basic design, and example applications for the design, will also be described. 
It should be understood that these various designs represent preferred embodiments of the invention, and as such, are not intended to limit the scope of the invention. The invention is defined only by the appended claims. [0066] For convenience, the description is arranged within the following sections and subsections:
[0067] 1. Overview [0068]FIG. 1 illustrates an amplifier system [0069] The analog circuitry provided along the path between the DAC [0070] In FIG. 1 and throughout the description of the various embodiments, it may be assumed that the input transmission signal, Vm(t), is a wideband signal. More specifically, it may be assumed that Vm(t) has at least one, and preferably all, of the following characteristics: (a) the signal stimulates the amplifier system [0071] The basic objective of the wideband predistorter design is to digitally compensate the wideband input signal, Vm(t), such that after RF upconversion and amplification by a nonlinear amplifier [0072] The Adaptive Control Processing and Compensation Estimator (ACPCE) [0073] The Adaptive Control Processing and Compensation Estimator (ACPCE) [0074] In a preferred embodiment, the DCSP [0075] An important aspect of the invention involves features of the DCSP [0076] 2. General Operation of Predistortion System [0077]FIG. 2 illustrates the flow of information within the amplifier system [0078] As further illustrated by FIG. 2, the ACPCE [0079] The ACPCE [0080] 2.1. Operation of the Open Loop Real Time Forward Path [0081] In practice, a complex baseband signal Vm(t) that is intended to be amplified is applied to the input of the DCSP [0082] The generalized DAC block [0083] The RF passband signal Vd [0084] 2.2. Operation of the Real Time Feedback and Observation Paths [0085] A sample of the energy fed to the amplifier load, Vf [0086] The ACPCE [0087] The new parameters calculated by the ACPCE [0088] 3. Operation of Individual System Components [0089] The individual components or blocks of the amplifier system [0090] 3.1. Digital Compensation Signal Processing (DCSP) Block [0091] This section details the operation of the DCSP [0092] 3.1.1. DCSP Construction [0093]FIG. 3 illustrates the construction and operation of the digital wideband predistorter embedded within the DCSP [0094] 3.1.2. 
DCSP Functional Units and Operation [0095]FIG. 3 illustrates the various functional units of the DCSP [0096] An important aspect of the design is the use of a multi-dimensional look up table [0097] Although the use of a multi-dimensional data structure as set forth herein provides significant benefits, a one-dimensional data structure may be used, for example, in applications for which the input signal does not vary substantially in average power. Specifically, because the average power remains substantially constant, the sets of compensation parameters associated with other average power levels need not be generated or stored, permitting the elimination of one dimension of the table. In such embodiments, each element of the table again stores a complete set of compensation parameters, but the table is now indexed (accessed) based solely on an instantaneous attribute of the input signal, such as the signal's magnitude. [0098] The upper path in FIG. 3 is responsible for computing the digital output signal Vd [0099] The lower data path illustrated in FIG. 3 is responsible for selecting the set of correction coefficients to be loaded, on a sample-by-sample basis, into the predistortion filter [0100] To compute the lookup table addressing indices, the magnitude (or power) of the input signal Vm(t) is initially computed in block [0101] The magnitude signal generated by block [0102] As depicted in FIG. 3, the column index to the look-up table [0103] The overall size of the look up table [0104] As depicted by the vector X [0105]FIGS. 4A and 4B illustrate example digital circuits that may be used to implement the predistortion filter [0106] 3.1.2.1. Integration Filter Construction [0107]FIG. 5 illustrates the construction of the integration filter [0108] To ensure that the integration filter [0109]FIG. 
6 illustrates a nonlinear integration filter kernel that may be used to overcome this problem when the wideband predistortion design is used with transistor technologies that exhibit nonlinear changes as a function of temperature. The nonlinear integration filter [0110] Equation 1 provides a mathematical definition of a nonlinear integration filter structure which may be used. The filter may be envisioned as a series of Taylor series expansions. For each time lag the series expansion is independent, and so the structure can practically compute any nonlinear thermal characteristic function that may be exhibited by the transistor die. A goal of the ACPCE algorithms is to adjust the tap coefficients and delay between the taps such that the integration filter model of the amplifier provides an accurate representation of the amplifier's true characteristic.
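The structure described above, an independent truncated Taylor series per time lag whose per-tap outputs are summed, can be sketched as follows. The tap count, delays, and coefficient values are illustrative placeholders, not parameters from this application.

```python
# Sketch of the nonlinear integration filter structure described above: each
# delayed envelope sample feeds its own independent polynomial (a truncated
# Taylor series), and the per-tap outputs are summed. Tap count and
# coefficient values are illustrative placeholders, not from this document.
def nonlinear_integration_filter(env_history, tap_coeffs):
    """env_history: envelope samples, most recent first, one per tap delay.
    tap_coeffs: per-tap polynomial coefficients [c0, c1, c2, ...], so a tap
    seeing delayed envelope e contributes c0 + c1*e + c2*e**2 + ...
    """
    total = 0.0
    for e, coeffs in zip(env_history, tap_coeffs):
        total += sum(c * e ** k for k, c in enumerate(coeffs))
    return total

# Three taps, each with an independent quadratic characteristic.
coeffs = [[0.0, 0.5, 0.1], [0.0, 0.3, 0.05], [0.0, 0.1, 0.0]]
thermal_state = nonlinear_integration_filter([1.0, 0.8, 0.4], coeffs)
```

Because each lag has its own polynomial, the structure can approximate an arbitrary nonlinear thermal characteristic, which is the property the ACPCE exploits when fitting the tap coefficients and delays.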
[0111] Although the integration filter's tap coefficients and tap delay values do not change on a sample by sample basis, they can be adjusted by the ACPCE if more accurate or appropriate values have been computed. One method for changing these values is to download a sequence of changes over a period of time such that a large step is broken into a sequence of smaller steps. As mentioned above, classic numerical interpolation techniques can be used to provide smooth transitions between steps so that disturbance errors are reduced. [0112] 3.1.2.2. Extended DCSP Compensation Architectures [0113] High power nonlinear amplifiers typically exhibit second and third order characteristics that vary as a function of the applied input signal waveform. In particular, second and even order distortion mechanisms can, in sufficiently high power amplifiers, cause the bias voltages to become modulated with the input modulation signal information bearing envelope. Consequently, in these scenarios, an increase in DCSP compensation circuit complexity is desirable to combat the AM-AM and AM-PM that becomes dependent upon the envelope's instantaneous characteristic. [0114]FIG. 7 illustrates the expansion of the DCSP circuit [0115]FIG. 8 illustrates an extension to the DCSP [0116] The architecture illustrated in FIG. 8 allows the DCSP to generate a full Volterra non-linear kernel in a piece-wise linear manner. That is, the DCSP can provide correction coefficients from the multi-dimensional data structure [0117] 3.1.3. DCSP Theory of Operation [0118] The theory of operation of the DCSP implementation shown in FIG. 3 will now be described in greater detail. 
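The stepped coefficient update of paragraph [0111], in which a large parameter step is downloaded as a sequence of smaller interpolated steps to limit disturbance errors, can be sketched as follows. The step count and coefficient values are illustrative.

```python
import numpy as np

# Sketch of the update smoothing in [0111]: rather than jumping from the old
# tap coefficients to the new ones in one step, the change is applied as a
# sequence of linearly interpolated intermediate sets. Values illustrative.
def interpolated_updates(old, new, n_steps):
    """Yield n_steps coefficient sets moving linearly from old to new."""
    old = np.asarray(old, dtype=float)
    new = np.asarray(new, dtype=float)
    for k in range(1, n_steps + 1):
        yield old + (new - old) * (k / n_steps)

old = [1.0, 0.0, 0.0]
new = [0.9, 0.05, -0.02]
steps = list(interpolated_updates(old, new, 4))
```

Higher-order interpolation could be substituted for the linear ramp where smoother transitions are needed, consistent with the "classic numerical interpolation techniques" mentioned in [0111].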
[0119] As discussed above, the amplification chain [0120] If the amplifier assembly has been subjected to a high power operating point for a period of time, the transistor die will be significantly hotter and will hence exhibit a different AM-AM and AM-PM characteristic than if the input stimulus had caused the amplifier to operate at a lower power. A low die temperature will cause the nonlinearity exhibited by the amplifier to change, and as a consequence, a different set of correction coefficients will be used. [0121] Since the two-dimensional look up table [0122] In practice, the length of the FIR filter [0123] 3.2. Adaptive Control Processing and Compensation Estimator [0124] The Adaptive Control Processing and Compensation Estimator (ACPCE) [0125] 3.2.1. ACPCE Operation [0126]FIG. 9 is a state machine diagram that illustrates an example control process that may be implemented by the ACPCE to control the overall operation of the amplifier system. The states illustrated in FIG. 9 are described in detail in the following subsections. Many of the illustrated states use numerical and signal processing algorithms that operate upon stored sample data sequences of the digital input signal, Vm(t), and of the downconverted and digitized amplifier output, Vf(t). To ensure clarity, these data processing algorithms are detailed separately in a later section and are only referred to briefly within the state machine description. [0127] It is assumed in the FIG. 9 embodiment that the power amplifier [0128] 3.2.1.1. State 1: Transmit Power Off [0129] In the TX POWER OFF STATE(1) the ACPCE ensures that the amplifier
[0130] 3.2.1.2. State 1 A: Transmit Power Up [0131] When in the TX POWER UP STATE(1A), the ACPCE ensures that no RF emission from the amplifier system
[0132] 3.2.1.3. State 1B: Transmit Power Down [0133] In the TX POWER DOWN STATE(1B), the ACPCE removes the amplifier bias and DC supply voltage in a controlled manner so that no RF emission from the amplifier system occurs. The ACPCE applies the following control logic when in this state:
[0134] 3.2.1.4. State 2: Calibration [0135] In the calibration state the ACPCE determines whether the stored compensation parameters are still valid. This state captures a large breadth of conditions that may include initial provisioning of a new power amplifier. While in this state the ACPCE also determines if a transmission power ramp is required or whether the signal Vm(t) has a power ramp already embedded within its structure. This may also be a user programmable option. The ACPCE applies the following control logic when in this state:
[0136] 3.2.1.5. State 3: Training and Acquisition [0137] In the TRAINING AND ACQUISITION STATE(3) the ACPCE examines the stored compensation parameters and the performance of the predistortion process by monitoring the recovered power amplifier samples and identifying the characteristics of the upconversion and amplification chain [0138] The ACPCE identifies the imperfections of the analog upconversion and amplification chain using several algorithms. These algorithms use one or more training sequences that may be used in conjunction with various estimation techniques to compute the initial estimates of the compensation parameters. Each algorithm has unique attributes that provide different advantages in different commercial environments. These algorithms are described throughout Section 3.3. [0139] The ACPCE applies the following control logic when in this state:
[0140] Step 1: stimulate the analog RF upconversion, amplification and power combining circuitry with one or more of the following test sequences: [0141] a) transmit a narrowband bandlimited transmission sequence on the upconversion and amplifier chain. [0142] b) transmit a wideband bandlimited transmission sequence on the upconversion and amplifier chain. [0143] c) transmit a narrowband bandlimited white noise signal on the upconversion and amplifier chain. [0144] d) transmit a wideband bandlimited white noise signal on the upconversion and amplifier chain. [0145] e) transmit a discrete or continuous frequency chirp sequence on the upconversion and amplifier chain. [0146] f) transmit a discrete or continuous polyphase sequence on the upconversion and amplifier chain. [0147] g) transmit a random modulation sequence s(t) on the upconversion and amplifier chain. [0148] It is important to note that this stage may require the ACPCE to isolate the amplifier from an antenna and direct the generated RF energy to a dummy load to prevent undesirable power emission during training. [0149] Step 2: for each transmitted sequence the ACPCE shall collect a finite sequence of data samples of the transmitted signal Vm(t) (prior to digital signal compensation processing) while simultaneously collecting a concurrent finite sequence of data samples from the recovered downconverted power amplifier combining output circuit via the ADC circuits, Vf(t). [0150] Step 3: the ACPCE shall compute from the ensemble of received data samples estimates of all upconversion imperfections. This may be done by utilizing one or more of the following algorithms: [0151] a) correlation. [0152] b) LMS system identification. [0153] c) RLS system identification. [0154] d) nonlinear Kalman filter system identification algorithms. [0155] e) any signal processing algorithm that is capable of system identification in non-linear signal processing, e.g. 
distortion analysis by wavelet multi-signal resolution. These algorithms are discussed in Section 3.3. [0156] Step 4: compute estimates of the signal compensation parameters that are required to counteract the imperfections identified in the previous step (Step 3). [0157] Step 5: upload compensation parameters to the Digital Signal Compensation Processing block via the parameter state vector X [0158] Step 6: for each transmitted sequence the ACPCE shall continue to collect a finite sequence of data samples of the transmitted signal Vm(t) (prior to digital signal compensation processing) while simultaneously collecting a concurrent finite sequence of data samples from the recovered downconverted power amplifier combining output circuit via the ADC circuits, Vf(t). [0159] Step 7: determine if the error between the desired transmitted sequence Vm(t) and the observed sequence Vf(t) is below an acceptable level. Step 8: if the error is below an acceptable level then store the updated compensation parameters and proceed to Step 9; else repeat Steps 1-7. [0160] [0161] Step 9: if all channels have been calibrated then finish; else repeat Steps 1-8 for the next channel. The channels to be calibrated may be defined as a user option.
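As a concrete example of the Step 1 stimuli, test sequence (e), a frequency chirp, might be generated at complex baseband as in the sketch below. The sweep parameters and sample rate are illustrative only:

```python
import numpy as np

def chirp_training_sequence(num_samples, f0, f1, fs):
    """Complex baseband linear chirp sweeping from f0 to f1 Hz at sample
    rate fs. The envelope is constant, so the amplifier chain is exercised
    across frequency at a fixed drive level."""
    t = np.arange(num_samples) / fs
    rate = (f1 - f0) / t[-1]                     # sweep rate, Hz per second
    phase = 2 * np.pi * (f0 * t + 0.5 * rate * t**2)
    return np.exp(1j * phase)
```

The constant-envelope property is what makes a chirp attractive here: frequency-dependent gain and phase errors can be observed without simultaneously exciting amplitude-dependent nonlinearity.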
[0162] 3.2.1.6. State 4: Transmission Ramp Up [0163] In the TRANSMISSION RAMP UP STATE(4) the ACPCE provides a smooth bandlimited transition between the transmitted training sequence state and the start of the modulation signal. In practice, the ACPCE ensures that during the transition the gradients of the amplitude, phase and frequency trajectories are continuous and bandlimited. This is very similar to the ordinary problem of amplifier “clicks”, known to those skilled in the art since the inception of telegraphic (Morse code) keying. However, it is important to note that this effect is more pronounced in a wideband predistortion transmitter because the amplifier is running at full power and any step or disturbance in the modulation trajectory will cause distortion power spectra to be generated. [0164] Thus the ACPCE provides a smooth transition between the normal transmission state and the burst training state. As mentioned earlier, this is readily achieved by interpolation in the amplitude, phase and frequency domains. [0165] The ACPCE applies the following control logic when in this state:
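A minimal sketch of such a bandlimited transition is a raised-cosine amplitude ramp, which keeps the amplitude trajectory and its gradient continuous at both endpoints. The specific window shape is an assumption; the text only requires continuity and bandlimiting:

```python
import numpy as np

def raised_cosine_ramp(num_samples):
    """Amplitude ramp from 0 to 1 whose value and slope are continuous at
    both ends, suppressing the keying 'clicks' described above."""
    n = np.arange(num_samples)
    return 0.5 * (1.0 - np.cos(np.pi * n / (num_samples - 1)))
```

Multiplying the start of the modulation waveform by this ramp (and its mirror image at ramp down) avoids the step discontinuities that would otherwise generate distortion power spectra at full amplifier power.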
[0166] 3.2.1.7. State 7: Transmission Ramp Down [0167] Power ramp down can suffer identical spectral emission problems to those incurred when an amplifier is ramped up in power. The algorithm used for power ramp up is also directly applicable to the power ramp down scenario. The ACPCE applies the following control logic when in this state:
[0168] 3.2.1.8. State 5: Track and Update [0169] State 5 represents the normal operational state of the amplifier system [0170] The ACPCE uses several algorithms to continually improve the accuracy of the compensation parameters during on-line operation. These algorithms employ the random transmit signal Vm(t) as a training sequence that may be used in conjunction with various estimation techniques to compute the updated estimates of the compensation parameters. Each algorithm has unique attributes that provide different advantages in different commercial environments. These algorithms are described in Section [0171] The ACPCE applies the following control logic when in this state:
[0172] Step 1: from the transmitted signal sequence, Vm(t), the ACPCE shall collect a finite sequence of data samples of the transmitted signal components Vm(t) (prior to digital signal compensation processing) while simultaneously collecting a concurrent finite sequence of data samples from the recovered downconverted power amplifier combining output circuit via the ADC circuits, Vf(t) (i.e., kVm(t)). [0173] Step 2: the ACPCE shall compute updated estimates of the compensation parameters from the ensemble of received data samples. This may be done by utilizing one or more of the following algorithms: [0174] a) LMS system adaptation and gradient update algorithms. [0175] b) RLS system adaptation and gradient update algorithms. [0176] c) nonlinear Kalman filter system adaptation and gradient update algorithms. [0177] d) any signal processing algorithm that is capable of adaptation such that the updated compensation parameters are more accurate than the existing parameters. [0178] These algorithms are discussed in Section 3.3. [0179] Step 3: upload compensation parameters to the Digital Signal Compensation Processing block via the parameter state vector X [0180] Step 4: for each transmitted sequence the ACPCE shall continue to collect a finite sequence of data samples of the transmitted signal components Vm(t) (prior to digital signal compensation processing) while simultaneously collecting a concurrent finite sequence of data samples from the recovered downconverted power amplifier combining output circuit via the ADC circuits, Vf(t) (i.e., kVm(t)). [0181] Step 5: determine if the error between the desired transmitted sequence Vm(t) and the observed sequence Vf(t) (i.e., kVm(t)) is below an acceptable level. [0182] Step 6: if the error is below an acceptable level then store the updated compensation parameters and proceed to Step 7; else repeat Steps 1-5. [0183] Step 7: finish. [0184] ELSE the ACPCE shall remain in the TRACK AND UPDATE STATE(5) [0185] 3.2.1.9. 
State 6: Burst Idle Training [0186] The burst idle training state is preferably used only when the wideband amplifier system [0187] The ACPCE applies the following control logic when in this state:
[0188] 3.3. ACPCE System Identification (SID) algorithms [0189] As described above, the ACPCE [0190] The following sections detail the operations and algorithms used within each state of the SID operation according to one embodiment. [0191] 3.3.1. State 1: Algorithms, Measure Circuit Characteristics [0192] The feedback signal Vf(t) is a copy of the input signal, Vm(t), subjected to a variety of imperfections induced by the amplifier [0193] A difficulty faced by the ACPCE SID algorithms is that it is extremely easy to damage or even destroy the amplifier [0194] 3.3.1.1. Power Ramping Algorithm and Measurement Signal Structure [0195] 3.3.1.1.1. Overview [0196] To measure the characteristics of the amplification chain [0197] The measurement signal is applied to the amplification chain [0198] The saturated output power level and 1 dB compression point are identified by examining the relationship between the input power and output power and observed amplifier gain curves, the general form of which are shown in FIG. 12. The input power associated with the 1 dB compression point occurs at a point [0199] Once the maximum output power P [0200] 3.3.1.1.2. Algorithm Flow Chart of Measurement Process [0201] The previous section provided an overview of the measurement process. FIG. 13 illustrates a preferred embodiment of the process. The illustrated state diagram provides the internal processes of the first state illustrated in FIG. 10. [0202] 3.3.2. State 2: Algorithms, Construct Amplifier and Circuit Model [0203] 3.3.2.1. Overview [0204]FIG. 14 summarizes the initial identification problem that is solved by the ACPCE system identification algorithms. As described in Section 3.3.1, during SID, the ACPCE stimulates the wideband amplifier with a measurement waveform Vm(t) and records the associated output/observed signal Vf(t). 
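The 1 dB compression search described in [0198] can be sketched from a table of stepped power measurements as follows. Treating the first (lowest-power) measurement as the small-signal gain reference is an illustrative assumption:

```python
import numpy as np

def one_db_compression_point(pin_dbm, pout_dbm):
    """Given stepped input/output power measurements in dBm, estimate the
    input power at which the observed gain has fallen 1 dB below its
    small-signal value."""
    gain = np.asarray(pout_dbm) - np.asarray(pin_dbm)
    small_signal_gain = gain[0]          # assume the first point is linear
    compressed = np.nonzero(gain <= small_signal_gain - 1.0)[0]
    return pin_dbm[compressed[0]] if compressed.size else None
```

Because the ramping procedure must not overdrive the device, the measurement would in practice stop stepping power upward as soon as this function returns a value.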
However, observation of particular elements within the amplification chain [0205] Identification of a system model is a well defined control problem that has many solutions in the robotics and control field. LMS, RLS, and Kalman type algorithms are preferably used for this purpose, including extended LMS, momentum LMS, extended RLS, extended Kalman, and non-linear Kalman algorithms. [0206] FIGS. [0207] 3.3.2.2. Power Amplifier Models [0208] The structure of the models [0209] 3.3.2.2.1. First Order Extended Single Kernel Nonlinear Power Amplifier Model [0210] The first order extended single kernel model [0211] Since the amplifier model [0212] 3.3.2.2.2. Second Order Extended Single Kernel Nonlinear Power Amplifier Model [0213] The simple wideband power amplifier model described in the previous section is appropriate for low power devices which exhibit very weak nonlinearities in which memory effects are minor. As the power rating of an amplifier increases and exceeds 1 watt RF power capability, several second order effects become sufficiently pronounced that the error floor associated with the previous model is generally too high to be used. That is, the FIG. 15 model does not represent the behavioral characteristics of the amplification chain [0214]FIG. 16 illustrates the next level of wideband power amplifier model complexity. The model [0215] The differential of the modulation envelope is an important process to consider when extending the range of independent variables over which the power amplifier's nonlinearity is modeled. Laboratory tests have shown that although the envelope PDF of two particular waveforms may be identical, the nonlinearity exhibited by a particular amplifier may vary significantly if the level crossing rates differ, i.e. one signal exhibits a different bandwidth. 
The key process is that envelope rectification within the power amplifier may occur, producing a DC signal level that modulates the transistor bias voltages and hence alters, in a time variant fashion, the nonlinearity exhibited by the amplifier. The model illustrated in FIG. 16 provides a mechanism by which this process can be isolated from the bulk AM-AM and AM-PM characteristic of the amplifier. In an analogous manner to the previous model, the signal envelope's rate of change is computed by the differentiator [0216] 3.3.2.2.3. Third Order Extended Single Kernel Nonlinear Power Amplifier Model [0217] As the operating power level increases beyond 10 watts of RF power, the power amplifier's transistor junction die temperature fluctuates significantly as a function of the modulation envelope. Since the intrinsic AM-AM and AM-PM characteristic of the amplifier [0218] In an analogous manner to the previous model, the averaged power level (die temperature) is computed by the integrator circuit [0219] 3.3.2.2.4. Third Order Extended Multi Kernel Nonlinear Power Amplifier Model [0220] The previous sections described models [0221] The model simply requires that each element of the data structure [0222] 3.3.2.3. Computation of Model Parameters [0223] 3.3.2.3.1. Overview [0224] Computation of the power amplifier's model parameters is a straightforward process consisting of three main steps. The first step computes the bulk gain, phase rotation and delay difference observed between the reference signal Vm(t) and the observed signal Vf(t). The resulting parameters are used to implement block [0225] The third step invokes an adaptation engine that fine tunes the filter coefficients to minimize the mean square error between the observed power amplifier output signal Vf(t) and the predicted output signal Vp(t). 
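The two auxiliary signals feeding the second- and third-order models above, the envelope differentiator and the power integrator standing in for die temperature, can be sketched as below. The moving-average window and the use of `np.gradient` are illustrative assumptions:

```python
import numpy as np

def model_index_signals(vm, fs, avg_window):
    """Auxiliary signals for the extended amplifier models: the envelope's
    rate of change (differentiator branch) and the short-term average
    envelope power (integrator branch, a proxy for die temperature)."""
    env = np.abs(vm)
    denv_dt = np.gradient(env) * fs                       # envelope slew rate
    kernel = np.ones(avg_window) / avg_window
    avg_power = np.convolve(env**2, kernel, mode="same")  # integrated power
    return denv_dt, avg_power
```

Two waveforms with identical envelope PDFs but different level-crossing rates produce different `denv_dt` statistics, which is exactly the distinction the laboratory observation above requires the model to capture.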
Adaptation continues until an error floor is reached, which in turn causes the ACPCE to determine if the error floor is sufficiently small that the power amplifier model is sufficiently accurate for the particular application. If the error floor is deemed to be at a satisfactory level, the model coefficients are stored and the amplifier modeling process is considered to be complete. Alternatively, the amplifier model complexity may be increased and the adaptation engine re-engaged to fine tune the parameters until a lower error floor is reached. Multiple iterations of this process may be used with ever increasing amplifier model complexity until a satisfactory error floor is reached. [0226] Each of these three steps will now be described in further detail. [0227] 3.3.2.3.2. Step 1: Bulk Gain, Phase and Delay Estimation [0228] The bulk delay between the input reference signal Vm(t) and the observed amplifier output, Vf(t), is readily determined by examining the cross correlation between these two signals. The bulk delay is estimated by selecting the delay τ that maximizes the magnitude of the cross correlation function defined in Equation 3. Once this time delay has been computed, the bulk phase rotation may be estimated by examining the argument of the cross correlation function for the delay that maximizes the cross correlation function, Equation 4. In typical discrete time processing scenarios, the time delay τ may only be estimated in discrete sampling steps where the smallest time delay step is defined by the sampling/clock rate. In these scenarios, increased time delay estimation accuracy can be achieved by interpolating the observed waveform Vf(t) into secondary waveforms that are shifted by a fraction of a sampling period and subsequently recomputing Equation 3. Thus, subsample time delay offsets in the correlation function may be examined. Consequently, the fractional time delay exhibiting the largest cross correlation magnitude defines the bulk time delay. 
Naturally, the bulk phase rotation can also be recomputed from the more accurate cross correlation value.
[0229] An alternative method for bulk delay estimation is to exploit cyclo-stationary properties of the input reference waveform Vm(t). This approach permits very accurate delay estimates to be computed but suffers from the inability to detect delay for certain classes of signal waveforms that do not exhibit band edge recovery properties, e.g. OQPSK. Correlation is generally preferred because it is reliable with signals that do not exploit cyclo-stationary properties. [0230] The bulk gain of the system is readily computed by utilizing Equation 5 when the system is stimulated with the wideband signal stimulus. In essence, Equation 5 computes the average input and output power levels and estimates the amplitude ratio which by definition is the bulk gain of the system.
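The Equation 3-5 estimates (delay from the peak of the cross-correlation magnitude, phase from its argument at that peak, gain from the rms power ratio) can be sketched as follows; this version works at integer lags only and omits the fractional-delay interpolation refinement described above:

```python
import numpy as np

def bulk_delay_and_gain(vm, vf):
    """Estimate bulk delay, phase rotation and gain between the reference
    signal vm and the observed amplifier output vf."""
    xcorr = np.correlate(vf, vm, mode="full")
    peak = int(np.argmax(np.abs(xcorr)))
    lag = peak - (len(vm) - 1)                       # Equation 3: peak lag
    phase = np.angle(xcorr[peak])                    # Equation 4: peak argument
    gain = np.sqrt(np.mean(np.abs(vf)**2) /
                   np.mean(np.abs(vm)**2))           # Equation 5: rms ratio
    return lag, phase, gain
```

With these three values, Vf(t) can be scaled, rotated and time-aligned to Vm(t) so that only the frequency-dependent nonlinear residual remains, as described in the next paragraph.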
[0231] Once these three parameters have been computed, the power amplifier's observed signal output Vf(t) can be scaled, rotated and delayed to match the original input signal Vm(t). The differences that are now exhibited between these two signals represent the difference incurred due to the wideband frequency dependent nonlinear characteristics of the power amplifier. Thus, the first level of the amplifier model consists of a simple complex gain (scale and phase rotation) and a bulk propagation delay filter. The associated parameters of this circuit are precisely the values that have been computed in the above equations. This ensures that the predicted amplifier model's output waveform, Vp(t), is synchronized in time and matched in amplitude and phase with the output of the external amplifier's signal Vf(t). [0232] 3.3.2.3.3. Step 2: Wideband FIR Response Estimation [0233] The second step in the development of the power amplifier model [0234] The set of resulting vectors is then stacked to form a 2-dimensional matrix with frequency and amplitude axes. Each element of the matrix stores the amplitude and gain response of the amplifier at a particular frequency and input amplitude level. FIG. 19 illustrates the gain response contained within the matrix. [0235] The individual FIR filters that describe the wideband frequency domain response of the amplifier [0236] The above procedure permits the least complex model illustrated in FIG. 15 to be estimated with potentially non-optimum tap values. Typically the inverse FFT will be computed assuming an excess of filter taps is permissible, followed by truncation to a practical number after frequency to time domain conversion has occurred. However, because the model [0237] 3.3.2.3.4. 
Adaptation of Model Coefficients (for the purposes of increasing model accuracy) [0238] Once the initial parameters for the basic model [0239] The adaptation process continues until no further improvement in reduction of the error floor is observed. At this juncture the overall waveform vector error magnitude is examined to determine the accuracy of the model [0240] If the waveform vector error magnitude is sufficiently low, the model [0241] If a second order or greater model [0242] The error convergence floor is illustrated in FIG. 21. As depicted by FIG. 21, as the model complexity is increased, greater numbers of iterations may be required as the algorithm searches the parameter space while seeking the lowest error floor that can be converged upon. The uppermost curves [0243] After the parameter space has been searched, the FIR filter coefficients that correspond to the lowest error floor and circuit parameters are stored while the waveform vector error magnitude is computed to determine overall performance. If the performance is satisfactory, the model's final parameters are stored and the model estimation process is considered complete. Otherwise, the model complexity is increased and the process repeated. Naturally, this process can continually increase the complexity of the model [0244] Examination of FIG. 21 reveals that the waveform vector error magnitude rapidly falls as the model [0245] 3.3.2.3.4.1. Flow Diagram [0246] The flow diagram for the adaptation process algorithm described in the previous section is detailed in FIG. 22. [0247] 3.3.2.3.4.2. Basic LMS Adaptation Engine For Model Parameters [0248]FIG. 23 illustrates the aim of the adaptation process undertaken by the ACPCE as it adjusts the FIR filter coefficients of the wideband power amplifier model [0249] The complex coefficients of the FIR filter are adapted according to Equations 9 and 10: Verror(t)=Vf(t)−Vp(t) Equation 9 [X_{N}^{+}(t)]=[X_{N}^{−}(t)]+μ·Verror(t)·[Vm(t) Vm(t−1) . . . Vm(t−N+1)]^{*} Equation 10 [0250] For a three-tap FIR filter example, Equation 10 would be represented as Equation 11.
[0251] This algorithm is a direct implementation of the standard LMS algorithm. It is important that the time index of the captured stimuli and observation waveforms be consistent, and that the delays in the compensation network be properly handled. This is a normal requirement that is known to those skilled in the use of this class of algorithms. The iteration explicitly defined within Equations 9 and 10 is repeatedly executed over the sampled wideband data set until the residual RMS value of the error voltage Verror(t) has finished converging i.e., reached an error floor. [0252] As mentioned above, the adaptation engine preferably optimizes the estimate of a particular FIR filter embedded within the power amplifier model's data matrix [0253] 3.3.2.3.4.3. Recursive Least Squares (direct form) also known as the Kalman Filter update [0254] Although the computational simplicity of the LMS algorithm is very attractive, its convergence speed can be prohibitively slow. This can be overcome by utilizing the RLS or Kalman filter algorithms. These algorithms exhibit significantly faster convergence rates but at the expense of increased computational complexity. These algorithms may be used within the wideband predistorter design as a direct replacement for the LMS algorithm and employed in an identical manner. These algorithms are widely defined and explained in the public domain literature, consequently the algorithm will simply be defined using the nomenclature of Proakis without further explanation.
Verror(t)=S_{true}(t)−S_{obs}(t) Equation 13
C_{N}(t)=C_{N}(t−1)+P_{N}(t)Y^{*}_{N}(t)Verror(t) Equation 16
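A runnable sketch of the standard LMS iteration (the Equation 9 and 10 form: prediction, error, conjugate-regressor update) for fitting the model's FIR taps is given below. The tap count, step size and test data are illustrative; in the actual system the iteration runs over captured Vm(t)/Vf(t) records until an error floor is reached:

```python
import numpy as np

def lms_model_adaptation(vm, vf, num_taps, mu, num_epochs=1):
    """Standard LMS: predict Vp(t) from the current FIR estimate, form
    Verror(t) = Vf(t) - Vp(t), and step the coefficient state vector X(t)
    along the conjugate of the regressor [Vm(t), Vm(t-1), ...]."""
    x = np.zeros(num_taps, dtype=complex)     # coefficient state vector X(t)
    for _ in range(num_epochs):
        for t in range(num_taps - 1, len(vm)):
            regressor = vm[t - num_taps + 1:t + 1][::-1]
            verror = vf[t] - np.dot(x, regressor)     # Equation 9
            x = x + mu * verror * np.conj(regressor)  # Equation 10
    return x   # in practice, iterate until Verror reaches its error floor
```

The same engine serves both the forward amplifier model (Equations 9-10) and, later, the DCSP coefficients (Equations 25-26); only the signals plugged in differ.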
[0255] [0256] As indicated above, compensation circuitry may be included in the amplifier design to compensate for quadrature modulator and demodulator imperfections. These circuits may have internal interactions that cause the linear LMS and RLS algorithms to fail to correctly identify the true system parameters of the wideband amplifier. This occurs because the adjustment of the IQ modulator parameters will modify the gain and phase response of the circuit which is compensated for by the FIR filter coefficients. This interaction does not necessarily exhibit a linear characteristic, and as such, may cause the linear estimation algorithms to fail. This deficiency may be readily overcome by employing the extended Kalman filter algorithm which is designed to solve this class of problem. The ability of the extended Kalman filter to identify the system components despite the nonlinear interactions is achieved because the algorithm identifies the interactions between parameters as well as the parameters themselves. This naturally causes a significant increase in computational complexity. Consequently, this algorithm is preferably only used if it can be identified that nonlinear interactions between compensation parameters occur. [0257] The extended Kalman filter algorithm for nonlinear estimation environments is widely defined and explained in the public domain literature; consequently, the algorithm is simply specified (Equations 17-22) using the nomenclature of Proakis without further explanation. [0258] 3.3.3. State 3: Compute DCSP Model's Compensation Parameters [0259] 3.3.3.1. Overview [0260]FIG. 24A illustrates states 3 and 4 of the SID process depicted in FIG. 10. A numerical model [0261] One of two alternative methods is preferably used to compute the DCSP's coefficients. 
A simple method, which may fail for very high power amplifier systems, is to select a DCSP structure that matches the complexity of the power amplifier model [0262] A more reliable approach, referred to herein as direct estimation, is to initially reduce the power amplifier model [0263] 3.3.3.2. Initial Direct Estimation of the DCSP Coefficients [0264] Direct estimation of the initial DCSP coefficients proceeds by reduction of the wideband power amplifier model [0265] Embodied within the FIR filter coefficients is the complex gain, i.e., gain and phase response, of the amplifier at each frequency over which the amplifier may operate. This information is directly accessible if the FFT of each FIR filter is undertaken by the ACPCE to form a matrix of amplifier complex gains that are indexed by input amplitude and frequency. The ACPCE then creates an additional complex gain matrix that represents the behavior of the DCSP coefficients and attempts to compute the required complex gains such that the overall cascade of the DCSP and power amplifier provides a linear system gain. The procedure according to a preferred embodiment is described in the following paragraphs. [0266]FIG. 25A illustrates the first detailed step that is taken in the direct estimation of the DCSP predistortion correction coefficients during the SID process. As mentioned above, the entire forward amplifier model's multi-dimensional data structure [0267] Once the system gain, k, has been set, the ACPCE selects a vector of AM-AM and AM-PM amplifier responses for a given operating frequency, as shown in FIG. 25A. For each frequency the ACPCE cascades a simple one dimensional data structure indexed by amplitude with the corresponding highly simplified amplifier model. [0268] The ACPCE then seeks DCSP coefficients for which this linear gain is achieved for all frequencies and instantaneous amplitudes exhibited by the input signal. FIG. 26 illustrates how the DCSP coefficients are adjusted. 
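The iterative adjustment illustrated in FIG. 26 can be sketched as a simple proportional-control loop that drives the cascaded DCSP-plus-amplifier gain toward the target linear gain k. The compressive amplifier gain function below is a made-up characteristic for demonstration, not a measured model:

```python
import numpy as np

def invert_complex_gain(amp_gain, k=1.0, mu=0.5, iters=200):
    """Iteratively adjust a predistorter complex gain g so that the
    cascade g * amp_gain(|g|) achieves the target system gain k, using a
    proportional-control update instead of a closed-form inverse.

    amp_gain : callable returning the amplifier's complex gain at a given
               drive amplitude (here, the drive for unit input is |g|).
    """
    g = 1.0 + 0j
    for _ in range(iters):
        cascade = g * amp_gain(abs(g))    # system gain at this drive level
        g = g + mu * (k - cascade)        # proportional correction
    return g
```

Running one such loop per element of the frequency/amplitude response matrix yields the DCSP's AM-AM and AM-PM correction surface without ever solving the cascaded nonlinearity directly.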
Since the response of the system is the cascade of two highly nonlinear systems, it is difficult or impossible to directly solve for the DCSP coefficients. Therefore, the DCSP's complex gain is instead iteratively adjusted until the system gain, k, is achieved. This process may use the simple RASCAL algorithm (see Andrew S. Wright and Willem G. Durtler “Experimental performance of an Adaptive Digital Linearized Power Amplifier”, IEEE Trans. Vehicular Technology, Vol 41, No. 4, November 1992) or simple application of a proportional control algorithm. [0269] This process is undertaken for each element of the DCSP's frequency domain response matrix so that the following conditions are satisfied. Vp(t)=Vm(t)F(|Vm(t)|)G(F(Vm(t))) Equation 23 k·Vm(t)−Vp(t)=0 Equation 24 [0270] This approach permits the ACPCE to compute a frequency and amplitude indexed matrix which contains the AM-AM and AM-PM response of the predistortion coefficients that will be utilized by the DCSP. FIG. 25B illustrates a descriptive/notional picture of the cascade of the predistortion inverse model [0271] Once the DCSP's frequency domain complex response matrix has been estimated, the ACPCE converts this back to the time domain by performing an inverse FFT upon a vector of complex gains indexed by a constant input amplitude and varying frequency extracted from the frequency domain complex response matrix. This creates a set of FIR filter coefficients that may be used in the DCSP [0272] 3.3.4. State 4: Algorithms;-Adaptively Seek DCSP Compensations Parameters [0273] 3.3.4.1. Overview [0274] The following sections outline the process by which the computed DCSP parameters are fine tuned using the numerical models of the DCSP and the amplification chain. [0275] 3.3.4.2. DCSP Parameter Expansion [0276] Once the FIR filter coefficients have been computed, they are propagated into the extended DCSP circuit's coefficient data structure [0277] 3.3.4.3. DCSP Parameter Adaptation [0278] As depicted by FIG. 
24A, once the basic DCSP FIR filter coefficients have been propagated into the extended DCSP data structure [0279] 3.3.4.3.1. Basic LMS Adaptation Engine For Model Parameters. [0280]FIG. 28 illustrates the aim of the adaptation process undertaken by the ACPCE [0281] Equations 25 and 26 below represent a preferred LMS based algorithm for adjusting the complex filter coefficients, where X(t) is the state vector of estimated FIR filter coefficients, with the +/− nomenclature indicating updated vector parameters and current vector parameters: Verror(t)=Vf(t)−Vp(t) Equation 25 [X_{N}^{+}(t)]=[X_{N}^{−}(t)]+μ·Verror(t)·[Vm(t) Vm(t−1) . . . Vm(t−N+1)]^{*} Equation 26 [0282] This algorithm is a direct implementation of the standard LMS algorithm. For successful operation, the time index of the captured stimuli and observation waveforms should be consistent, and the delays in the compensation network should be properly handled. This is a normal requirement that is known to those skilled in the utilization of this class of algorithms. The iteration defined within Equations 25 and 26 is repeatedly executed over the sampled wideband data set until the residual RMS value of the error voltage Verror(t) has finished converging, i.e., reached an error floor. [0283] In an identical manner to that incurred while seeking the FIR coefficients of the amplifier model [0284] 3.3.4.3.2. Recursive Least Squares (direct form), also known as the Kalman Filter update [0285] Although the computational simplicity of the LMS algorithm is very attractive, as noted above, its convergence speed can be prohibitively slow. This can again be overcome by utilizing the RLS or Kalman filter algorithms, which exhibit significantly faster convergence rates but at the expense of increased computational complexity. These algorithms, which are summarized by Equations 12-16 above, may be used within the wideband predistorter design as a direct replacement for the LMS algorithm and employed in an identical manner. [0286] 3.3.4.3.3. Extended Kalman Filter for Nonlinear Estimation Scenarios. 
[0287] As indicated above and shown in FIGS. 3 and 4, the DCSP may include a correction circuit [0288] This deficiency may be readily overcome by employing the extended Kalman filter algorithm, which is designed to solve this class of problem. The ability of the extended Kalman filter to identify the system components despite the nonlinear interactions is achieved because the algorithm identifies the interactions between parameters as well as the parameters themselves. This naturally causes a significant increase in computational complexity. Consequently, this algorithm is preferably used only if it can be identified that nonlinear interactions between compensation parameters occur. The extended Kalman filter algorithms for nonlinear estimation environments are summarized by Equations 17-22 above. [0289] 3.3.4.3.4. Convolution Update [0290] Regardless of the actual adaptation algorithm used (see Sections 3.3.4.3.1-3), the iterative update of each FIR filter traditionally proceeds according to Equation 28, where X+ and X− denote the updated and current coefficient vectors. [0291] In the adaptive scenario illustrated in FIG. 24A, the ACPCE [0292] A more effective update which permits a decrease in convergence time is to employ a convolutional update as defined by Equation 29. The difficulty associated with this approach is that the span of the FIR filter continues to grow with each update. This problem is readily overcome by simply truncating the length of the updated FIR filter coefficients X+. [0293] 3.3.5. State 5: Algorithms, Compute, Store and Load DCSP Correction Coefficient Parameters [0294] After the procedures of the previous sections have been completed, the SID process is considered complete. However, before engaging in active operation and entering the system acquisition and tracking phases (SAT), the ACPCE stores the computed DCSP coefficients in non-volatile memory so that the SID calibration need not be repeated if a power failure or system re-start occurs.
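The convolutional update of Equation 29 (Section 3.3.4.3.4 above) can be sketched as follows. This is an illustrative sketch only: the correction-filter variable and the choice to truncate from the tail are assumptions, not details taken from the specification.

```python
import numpy as np

def convolutional_update(x_current, u_update, n_taps):
    """Convolutional update (cf. Equation 29): the current FIR coefficients are
    convolved with a correction filter, then truncated back to n_taps so that
    the filter span does not grow without bound on each iteration."""
    x_new = np.convolve(x_current, u_update)  # span grows to len(x)+len(u)-1
    return x_new[:n_taps]                     # truncate the updated coefficients
```

As a quick check, convolving with an identity correction filter (a unit impulse) leaves the coefficients unchanged after truncation.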
Suitable memory technologies include field programmable metal mask, EEROM, and flash ROM. Because the amplifier may operate at several different carrier frequencies, the SID process may be repeated several times before the SAT phase of operation is engaged. Under such a design requirement, several sets of SID DCSP parameters are stored in non-volatile memory so that rapid switches in the operating conditions can occur. In specific designs that are expected to display significant aging characteristics, periodic SID re-calibrations may be permissible, and as such, additional non-volatile storage such as FLASH ROM may be desirable to improve the reliability of the design. [0295] 3.3.6. ACPCE System Adaptation and Tracking Algorithms [0296] Upon entering the track and update state (5) in FIG. 9, the ACPCE loads the previously computed compensation parameter values into the compensation circuit (DCSP). During the lifetime of the transmission event, the physical characteristics of the analog components may change as a function of temperature, aging, power supply droop etc.; consequently, the compensation parameters are adjusted to continually track and compensate for these changes. [0297] The algorithms used to support this functionality are preferably identical to those used to initially evaluate the compensation parameters as described in the previous section. An important difference, however, is that the actual physical amplifier [0298] As illustrated in FIG. 29, the DCSP [0299] The above process of capturing observed data sequences, combined with numerical off-line computation, is repetitively used to ensure that the current values of the compensation parameters are sufficiently accurate to maintain regulatory power spectral emission, system modulation accuracy and amplifier NPR requirements. The accuracy of the parameter estimation can be enhanced by iterative updating of the parameters.
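The iterative LMS update used both during SID and during this tracking phase (cf. Equations 25 and 26) can be illustrated with a minimal complex-LMS sketch. The step size, tap count, and toy signals below are assumptions for illustration; the actual captured sequences and delay alignment are as described in the text.

```python
import numpy as np

def lms_update(vin, vtarget, coeffs, mu=0.05):
    """One pass of a complex LMS iteration (cf. Equations 25-26): the FIR
    coefficients are nudged so that the filtered input tracks vtarget,
    driving the error voltage toward the error floor."""
    n_taps = len(coeffs)
    x = coeffs.copy()
    for t in range(n_taps - 1, len(vin)):
        window = vin[t - n_taps + 1:t + 1][::-1]  # most recent sample first
        v_p = np.dot(x, window)                   # filter output, Vp(t) analogue
        v_error = vtarget[t] - v_p                # Verror(t) = Vf(t) - Vp(t) analogue
        x += mu * np.conj(window) * v_error       # standard complex LMS update
    return x

# Toy check: identify a known 3-tap complex FIR from captured sequences,
# repeating the pass over the data set until the error floor is reached.
rng = np.random.default_rng(0)
true_fir = np.array([0.9 + 0.1j, 0.2 - 0.05j, 0.05j])
vin = rng.standard_normal(4000) + 1j * rng.standard_normal(4000)
vtarget = np.convolve(vin, true_fir)[:len(vin)]
est = np.zeros(3, dtype=complex)
for _ in range(5):
    est = lms_update(vin, vtarget, est)
```

As in the text, the time indices of the stimulus and observation sequences must be aligned before the iteration is run; the toy example above is already aligned by construction.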
Rather than calculating new parameters based only on the information in one sample sequence capture, the amount of change of the parameters can be controlled by calculating a weighted average of the current calculated values with progressively smaller contributions from previous parameter calculations. With this technique, the newly calculated parameters do not change significantly or suddenly on each training calculation due to non-ideal characteristics of the data of particular sample sets. This type of long term averaging helps to achieve a better overall correction rather than one that “jumps” around the ideal position. [0300] As mentioned above, the transition from one parameter set to the next may be applied in steps spread over an interval of time to avoid sudden changes in the amplifier outputs. This may be done by looking at the new and previous parameter values, after the averaging described above (if used), and generating a sequence of parameter values on an interpolated path between the two sets of values. These would then be programmed into the filters and other correction systems in succession at intervals such that the change is made smooth and gradual. In an identical manner to that incurred while seeking the FIR coefficients of the amplifier model, the computation and update of each FIR filter's coefficients occurs on a FIR filter by FIR filter basis, with selection of a particular FIR filter being governed by the addressing of the data table [0301] In one embodiment, only the FIR filter coefficients of the DCSP data structure are evolved during the acquisition and tracking phase. It has been determined that due to the mechanical stability of the amplifier assemblies there is no requirement to adaptively adjust the time constant and span of the parameters of the differentiator [0302] 3.3.6.1. 
Summary of Update Algorithms [0303] To summarize, the algorithms used during the system acquisition and tracking state for DCSP compensation parameter estimation are described in the following sections. [0304] LMS Update: Sections 3.3.4.3.1 and 3.3.2.3.4.2 [0305] Recursive Least Squares (direct form) also known as Kalman Filter update: Sections 3.3.4.3.2 and 3.3.2.3.4.3. [0306] Extended Kalman Filter for Nonlinear Estimation Scenarios: Section 3.3.4.3.3 and 3.3.2.3.4.4. [0307] As discussed above, a convolution update may be used to achieve faster convergence during SID as a result of acknowledging particular attributes of the system architecture. In a similar manner, this technique can be directly applied during the SAT phase of operation. The discussion provided in Section 3.3.4.3.4 is directly applicable during SAT and as a consequence no further details are provided. [0308] 3.3.6.2. Non-Linear Filtered-input Adaption Mode [0309]FIG. 29, discussed above, illustrates a preferred embodiment for the operation of the SAT adaption algorithms that are utilized for normal operation during tracking mode. While in the tracking mode, the ACPCE adapts to the slowly changing amplifier effects that occur as a function of the age, temperature and operating conditions such as power supply variations. When compared to the direct non-linear pre-equalization structure illustrated in FIG. 29 it can be seen that a significant expansion of the ACPCE's signal processing requirements has occurred. This mode is regarded as the non-linear filtered-input adaption mode and consists of two mutually coupled adaption engines. The motivation for this approach occurs because this embodiment provides increased stability, less adaption jitter, resilience to system noise and rapid adaption rates. 
The adaption performance is sufficiently rapid that although more signal processing computation is required per iteration than the previously disclosed method, the total number of iterations is actually reduced, which actually results in a lower computation burden for the ACPCE DSP engine. [0310] As depicted in FIG. 30, the ACPCE adaption engine's mutually coupled adaption engines consist of a primary engine [0311] The circuit also permits the secondary adaption engine [0312] The inverse and forward estimators are also utilized to adapt the inverse and forward amplifier models [0313] The inverse and forward amplifier models are not required to be of equal complexity. For instance, the forward amplifier model [0314]FIG. 30 further schematically depicts the application of the filtered-input adaptation algorithm to the predistortion. Compared to the standard pre-equalizer configuration, the filtered-input pre-equalizer configuration adds a forward amplifier model which is simultaneously adapted by the secondary loop to model the unknown distortion generated by the physical power amplifier [0315] 4. Example Hardware Implementations [0316]FIG. 31 A illustrates a typical implementation of the wideband predistortion amplifier system [0317] Because the ACPCE operates in non-real-time, the ACPCE is preferably implemented using a general purpose DSP or microprocessor [0318] As depicted in FIG. 31A, the DCSP core [0319] The ASIC or FPGA includes a modest amount of ‘glue logic’ [0320] The implementation shown in FIG. 31A uses direct conversion upconversion (block [0321]FIG. 31B illustrates an alternate RF-in/RF-out embodiment that may be used if a digital baseband data source is not available. Naturally the core DCSP and ACPCE processes are identical with a few minor modifications to accommodate the imperfections of the input downconversion process and the digital drive circuitry. [0322] 5. 
Variations, Enhancements and Applications [0323] This section details several example variations, enhancements and applications of the predistortion architecture and methods set forth above. [0324] 5.1. Control of Multiple Amplifiers in a Predistorter for Maximizing Power Efficiency [0325]FIG. 32 illustrates an embodiment that may be employed in CDMA third generation cellular systems which use a multicarrier-multibearer airlink structure. The amplification chain [0326] By way of background, in periods of high calling rates, the amplifier system may be required to support in excess of 64 users for which each user signal is multiplexed onto a shared RF carrier. Each user may require up to 4 watts of RF power, so the aggregate peak power of the amplifier may readily exceed 256 watts. In practice such power levels are generated by employing multiple power amplifier modules, as depicted in FIG. 32. During periods of low traffic activity, i.e., a low number of active users, the power amplifier [0327] In accordance with one aspect of the invention, this wasted energy is eliminated or significantly reduced by utilizing the ACPCE [0328] The individual amplifiers [0329] Although this strategy of separately controlling the individual amplifiers [0330] The ACPCE can store or generate separate DCSP compensation parameters for each of the possible operating states associated with the manipulation of the amplifier operating points. This may be accomplished, for example, by adding an additional dimension to the multi-dimensional data structure [0331] An important artifact of this multi-amplifier control strategy is that the DCSP coefficients should reflect a gain increase or decrease to compensate for the change in average power that occurs when an amplifier module's state is changed. In practice this is straightforward since the DCSP coefficients maintain the overall loop gain at a constant value despite the reduction in maximum peak power capability. [0332] 5.2. 
Control of Multiple Independent Amplifiers for Antenna Array Applications [0333]FIG. 33A illustrates how the predistortion architecture may be employed in a transmission antenna array system [0334] The predistortion architecture as described above can be applied directly to each antenna section independently; however, such an approach is costly because excess components and physical space are required. Because the nonlinear power amplifier's characteristics change very slowly as a function of temperature, aging and mechanical stress, it is feasible to use a single ACPCE which computes updated parameters for multiple DCSP compensation circuits [0335] 5.3. Control of Multiple Independent Amplifiers for Hot Swap Redundant Applications [0336]FIG. 34 illustrates how the predistortion architecture may be employed in a hot swap redundant power amplifier assembly. Redundant hot swap amplifier assemblies are frequently utilized in cellular systems, and will be a prime requirement in multicarrier-multibearer systems such as W-CDMA and CDMA-2000 cellular systems. Redundant designs ensure that call availability is not compromised. This requires each amplifier assembly to support redundant amplifiers that are continually stimulated with a drive signal but with the generated power dumped in a dummy load. Should an amplifier fail or degrade in performance beyond the control of the predistortion system, the ACPCE can readily switch input signal streams and RF routing networks to ensure that the redundant amplifier is used while the failing amplifier is taken out of operation. Since the non-linear power amplifier's characteristics change very slowly as a function of temperature, aging and mechanical stress, it is feasible to utilize a single ACPCE which computes updated parameters for multiple DCSP compensation circuits [0337] 5.4. 
Signal Pre-Conditioning Algorithms [0338] The predistortion architecture described above also permits an amplifier system to be created which exhibits a perfect or near-perfect linear response up to the nonlinear amplifier's maximum output power. If the input signal is appropriately scaled such that its maximum input amplitude/power corresponds to the amplifier's maximum output power, then the power spectral density of the amplifier's output signal will be the same as that of the input signal. This permits a system designer the freedom to use spectral (radio) resources in an aggressive manner in an effort to maximize system capacity. [0339] In third generation CDMA systems, the amplifier is commonly employed to amplify signals with a very high peak-to-average ratio, typically greater than 12 dB. In such scenarios, the average output power of the amplifier is 12 dB lower than the maximum power of the amplifier and, as a consequence, amplifier efficiency is significantly reduced. As depicted in FIG. 35, system power efficiency may be increased in such scenarios by adding a signal pre-conditioning and compression circuit [0340] The circuit [0341] As illustrated in FIG. 36( [0342] An important feature of the digital pre-conditioning and compression circuit [0343] Equation 30 defines a family of soft compression functions which only invoke AM-AM distortion in the input signal. The equation has parameters α and β, which correspond to the degree of non-linearity invoked and the maximum input power, respectively.
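A soft compression function of this general kind can be sketched as follows. This is not the patent's Equation 30, which is not reproduced here; it is an illustrative AM-AM-only curve in which, by analogy, `alpha` controls the degree of non-linearity and `beta` the limiting input level.

```python
import numpy as np

def soft_compress(v, alpha=2.0, beta=1.0):
    """Illustrative AM-AM-only soft compression (an assumed stand-in for a
    function like Equation 30): the gain depends only on |v|, so no AM-PM
    distortion is introduced; alpha sets the degree of non-linearity and
    beta the level toward which the envelope is smoothly limited."""
    mag = np.abs(v)
    # Smoothly limit the magnitude to beta; the phase is passed through untouched.
    new_mag = beta * np.tanh((mag / beta) ** alpha) ** (1.0 / alpha)
    gain = np.where(mag > 0, new_mag / np.maximum(mag, 1e-12), 1.0)
    return v * gain
```

Small signals pass through nearly unchanged while large peaks are compressed toward `beta`, which is the qualitative behavior the text describes for Equation 30.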
[0344] Clearly, as α increases, the amplifier's efficiency increases with an associated increase in spectral regrowth. Manipulation of β permits a hard clipping level to be set. Equation 30 is disclosed as an exemplary function providing a non-linear pre-conditioning and pre-compression function. In practice, any function or non-linear equation that exhibits behavior that incurs desirable changes in the waveform may be employed. It is not unreasonable to imagine that deliberate insertion of AM-PM distortion may, on occasion, be desirable, requiring an alternate function. [0345] The DCSP compensation parameters should be computed and adaptively adjusted assuming that the output of the pre-conditioning circuit [0346] 5.4.1. Implementation Modes [0347] FIG. 35, discussed above, illustrates a circuit topology that would permit signal pre-compression/pre-conditioning processing to be readily utilized in conjunction with the digital predistortion architecture. A practical design would be to construct the circuit using a simple non-linear hardware function built from a set of multipliers and coefficients that provide a polynomial representation of the pre-conditioning/pre-compression function. Such a design is illustrated in FIG. 37. [0348] As illustrated in FIG. 37, the pre-conditioning/pre-compression function of Equation 30 has been effectively implemented as a hardware representation [0349] An alternate design to obviate the increase in latency is to realize that the basic multidimensional predistortion data structure [0350] The design illustrated in FIG. 37 can be implemented in software running on a DSP, but for wideband applications, the design is preferably implemented within a circuit of an FPGA, ASIC, or other automated hardware device.
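The multiplier-and-coefficient polynomial structure of FIG. 37 can be sketched in software as a Horner-rule evaluation of a gain polynomial in the instantaneous magnitude. The coefficients below are placeholders, not values from the specification.

```python
import numpy as np

def polynomial_precondition(v, coeffs):
    """Gain computed as a polynomial in the instantaneous magnitude |v|,
    mirroring the multiplier-and-coefficient structure of FIG. 37.
    'coeffs' are illustrative polynomial coefficients, highest order first."""
    mag = np.abs(v)
    gain = np.zeros_like(mag)
    for c in coeffs:              # Horner's rule: one multiply-add per coefficient
        gain = gain * mag + c
    return v * gain
```

One multiply-add per coefficient is exactly the structure a hardware chain of multipliers and adders would realize, which is why the polynomial form maps cleanly onto the FIG. 37 design.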
Since the ACPCE can be utilized to capture the input signal stream or the pre-conditioned input signal stream, operation of the predistortion and pre-conditioning circuits can proceed as normal because the ACPCE will be provisioned with software copies of the entire structure that it is controlling. In practice, the extensive capabilities of these cascaded non-linear functions exceed the requirements of typical pre-conditioning/pre-compression functions. This permits the pre-conditioning/pre-compression multi-dimensional data structure to be reduced to a single dimension, indexed by the input signal magnitude, and furthermore, to store only a single pre-conditioning coefficient that is multiplied with the input signal data. That is, the pre-conditioning filter reduces to a single tap FIR filter. [0351] Although the above design approach reduces the latency and power consumption of the previous pre-conditioning/pre-compression circuit [0352] The multi-dimensional data structure design approach (FIG. 38) for the pre-conditioning/pre-compression circuit [0353] It will be noticed that the composite circuit, which implements both the pre-conditioning/pre-compression circuit and the DCSP, is identical in structure to the multi-dimensional predistorter (DCSP) design [0354] 5.4.2. Adaptive Computation and Modeling for the Composite Pre-condition/Precompression and Predistortion System by the ACPCE [0355] The adaption and computation of the DCSP's coefficients when operating in the cascaded pre-conditioning and predistortion mode proceeds as shown in FIG. 41. As illustrated in FIG. 40, in the normal predistortion mode (no pre-conditioning), the ACPCE would ordinarily capture the input signal Vm(t) and the observed output of the amplifier Vf(t) sample sequences. These captured sequences would be processed in non-real time to form the error sequence Ve(t) by subtracting the time, phase and gain aligned sequences Vm(t) and Vf(t).
These three sequences would then be processed by the ACPCE to compute, in an adaptive manner, DCSP coefficients that could be downloaded to the DCSP. The repetition of this process results in a set of DCSP coefficients that causes the error sequence to converge to the noise floor of the system, i.e., the error free condition. [0356] As illustrated in FIG. 41, introduction of a preconditioning non-linearity is readily achieved by first modifying the captured input signal sequence Vm(t) by the pre-conditioning function [0357] Direct application of the above approach will cause a failure of the system to converge. The convergence failure can easily be identified and appropriate simple correction steps taken. The convergence failure occurs because the hardware (or software implementation) of the DCSP is operating in real time and utilizes the input signal Vm(t) to compute the indexes/address values into the multi-dimensional data structure, while the non-real-time ACPCE, if a literal interpretation of the preceding argument is adopted, would utilize Vp(t) as the input signal to the entire adaption process. The disconnect occurs because the DCSP would utilize Vm(t) to generate indices while the ACPCE would utilize Vp(t). This disconnect between the real time process and the non-real-time adaptive process is easily eliminated if slight changes are made, as portrayed in FIG. 42. [0358] As illustrated by the modified process flow of FIG. 42, the ACPCE utilizes Vm(t) to generate all index/address values computed as a function of the instantaneous and past properties of the input waveform employed to address the multi-dimensional data structure. However, Vm(t) is preconditioned to form Vp(t), which is utilized to generate the error function. This forces the ACPCE to compute DCSP coefficients that generate the desired system response from the cascade of non-linearities.
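The index/target split of FIG. 42 can be sketched as follows: table indices are always derived from the raw input Vm(t), exactly as the real-time DCSP derives them, while the adaptation target is the preconditioned Vp(t). The quantizer, bin count, and helper names here are assumptions for illustration.

```python
import numpy as np

def acpce_training_pair(vm, precondition, n_bins=64, vmax=1.0):
    """Sketch of the FIG. 42 process flow: indices into the multi-dimensional
    data structure come from Vm(t) (matching the real-time DCSP addressing),
    while the error/target sequence is formed from the preconditioned Vp(t)."""
    vp = precondition(vm)  # desired (preconditioned) response, used for the error
    # The same magnitude quantizer the DCSP would apply to address the structure:
    idx = np.minimum((np.abs(vm) / vmax * n_bins).astype(int), n_bins - 1)
    return idx, vp         # adapt the coefficients stored at idx toward vp
```

Because both the real-time hardware and the off-line adaptation now address the data structure from the same signal, the disconnect described above is removed.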
This occurs because Ve(t) is still reduced to a zero mean error condition by adaptively adjusting the DCSP coefficients. [0359] 5.5. Table Updating Techniques [0360] A practical implementation of the DCSP [0361] If this design approach is taken, the amount of memory utilized by the entire design will equate to three times that required for a single copy of the multi-dimensional data structure [0362] These disadvantages can potentially be overcome by employing dual port RAM, which permits two external devices to read and write to the memory at the same time. However, the internal dual port RAM actually delays either the read command until the write command is complete, or vice versa, in the event of timing contention. Since the DCSP preferably operates on a clock cycle by clock cycle basis without any interruption of the flow of correction coefficients (that are provided on a sample by sample basis), dual port RAM is not appropriate unless the device is overclocked (e.g., by a 2× factor). Because overclocking increases power consumption, it is not preferred unless the overall clock speed of the application is quite slow. [0363] FIG. 44 illustrates a preferred multi-dimensional data structure design that overcomes the deficiencies described above, including providing reduced silicon area and power consumption. The DCSP [0364] After new coefficients are downloaded to the multi-dimensional data structure segment [0365] The address mappers [0366] Each time the ACPCE downloads new coefficients to the ‘free’ segment, the entire memory address map of the ‘free’ address mapper [0367] The address mappers [0368] The multi-dimensional data structure [0369] 5.6.
Event Driven Capture Apparatus and Modes of Operation [0370] As discussed in Section 2.2, operation of the predistortion system ordinarily proceeds with the ACPCE capturing sequences of digital input signal samples and sequences of the digitized observed feedback signal from the power amplifier [0371] As illustrated in FIG. 45, the DCSP is augmented with two capture buffers/memories [0372] An important feature of the design is that the address counters [0373] In a preferred embodiment, the ACPCE may command this data capture controller to operate in one of four modes: [0374] mode 1: free run [0375] mode 2: capture upon command [0376] mode 3: free run event driven capture with delayed cessation [0377] mode 4: event driven capture [0378] Each mode is described below with further reference to FIG. 45. [0379] 5.6.1. Capture Mode 1 [0380] Mode 1 is the free running mode, and operation occurs in the following manner. The address counters [0381] This mechanism is continually exercised until the ACPCE issues a command to the data capture controller [0382] If the first approach is used, the ACPCE can read the address counters' state to determine when the data capture ceased. Naturally, when the ACPCE has uploaded the sequences, integer and fractional delay differences between the two sequences need to be computed before error signal sequences can be derived. Once the ACPCE has uploaded the data, it may command the data capture controller [0383] 5.6.2. Capture Mode 2 [0384] Mode 2 is the “capture upon command” mode. This operating mode permits the DCSP to be utilized in a power saving mode. The data capture controller [0385] Typically this mode is utilized in applications where the ACPCE is used to control multiple DCSP entities, such as in smart antenna arrays and hot swap architectures as described above. When operating in such a system, the ACPCE initiates data collection by commanding a specific DCSP's data capture controller [0386] 5.6.3.
Capture Mode 3 [0387] Mode 3 is the “free run event driven capture with delayed cessation” mode. This mode of operation is important to overcoming a particular difficulty that is encountered when using multi-dimensional data structures [0388] A particular example is when the EDGE waveform is utilized. This waveform is designed to have a very low probability of a low envelope absolute magnitude, thus nearly eliminating the probability that the lower amplitude and integrated past amplitude regions of the data structure are exercised. This problem is exacerbated when, while operating in mode 1 or 2, the ACPCE only processes a fraction of the input data and thus captures and uploads data less frequently. [0389] Mode 3 operation obviates this problem by continually capturing data in a manner identical to mode 1 while searching for rare events. This is achieved by permitting the ACPCE to program the data capture controller [0390] When the terminal counter [0391] 5.6.4. Capture Mode 4 [0392] Mode 4 is the event driven capture mode, and operation occurs in a manner very similar to mode 3. Mode 4 operates by freezing the address counters [0393] 5.6.5. Technology Summary [0394] The approach outlined above for modes 2 and 3 is a highly efficient solution because it directly utilizes the DCSP's multi-dimensional data structure's addressing computations to distinguish rare events. The comparison is easily achieved by utilizing programmable logic to create simple bitmaps that need to be compared for logical equivalence. This is also attractive because power consumption is reduced, since independent circuitry is not required to detect the rare events. [0395] The ACPCE can identify the infrequently accessed areas of the data structure [0396] 5.7. Temperature Sensor LUT operation [0397] FIG. 46 illustrates a modified DCSP system [0398] This approach reduces the amount of signal processing and ACPCE estimation processing that has to be undertaken.
The approach is attractive because the sampling rate of the transistor die temperature is quite slow, for it is defined by the thermal time constant (typically hundreds of milliseconds) of the amplifier assembly. [0399] In operation, the temperature sensor's output is sampled by an A/D converter [0400] 5.8. Utilization of Interpolation in the DCSP for Improved Noise Floor and Linearity [0401] As explained above, the wideband predistorter system operates by repeatedly observing the wideband amplifier's output signal Vf(t) and the input information bearing signal Vm(t). The ACPCE then computes the errors between the observed and ideal signals to create an updated set of DCSP compensation parameters that should reduce the error between the ideal and observed signals. In practice, any change in the DCSP compensation parameters, when downloaded, will cause a step change in the waveform fed to the amplifier input. [0402] In highly specialized scenarios where the small step change causes an unacceptable short term rise in spurious component generation, the effect can be reduced by interpolation of the DCSP compensation parameters. When interpolation is used, the normal update vector X is not downloaded in a single step. [0403] Rather, the downloaded vector is modified by interpolation, and multiple downloads occur for a period of time with the final download being defined by the target update X [0404] Equation 33 defines that the overall update process provides update vectors which do not exhibit overall gain changes as a result of interpolation. To those familiar with numerical interpolation, Equations 32 and 33 define simple linear interpolation. Higher order interpolation functions could alternatively be used, but experimental experience has shown that linear interpolation is adequate for suppressing spurious responses in critical applications such as Motorola's InFlexion paging system.
X′_{+}=αX_{+}+βX_{−}   Equation 32

α+β=1   Equation 33
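The linear interpolation described above (Equations 32 and 33) can be sketched as follows; the step count and variable names are illustrative.

```python
import numpy as np

def interpolated_downloads(x_prev, x_target, n_steps):
    """Generate the sequence of intermediate coefficient vectors downloaded to
    the DCSP: each step is a linear blend with weights alpha + beta = 1, so no
    overall gain change is introduced by the interpolation (cf. Equation 33)."""
    steps = []
    for i in range(1, n_steps + 1):
        alpha = i / n_steps                             # weight on the target vector
        beta = 1.0 - alpha                              # weight on the previous vector
        steps.append(alpha * x_target + beta * x_prev)  # X' = alpha*X+ + beta*X-
    return steps                                        # final download equals x_target
```

Downloading these vectors in succession replaces one abrupt parameter step with a smooth, gain-preserving ramp to the target update.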
[0405] The iterative download approach proceeds by computing X′ [0406] 5.9. Multiple Memory Allocations for Different PSD Combinations/Channel Allocations [0407] Frequency hopped spread spectrum systems, such as the second generation GSM and EDGE cellular systems, operate over very wide operating bandwidths. In many systems, and especially those operating at LMDS frequencies (40 GHz), the operating bandwidth exceeds the correcting bandwidth of the basic wideband predistorter design described above. FIG. 47 illustrates an extension to the basic design where the ACPCE is enhanced by providing extended memory storage capabilities [0408] In simple systems, the ACPCE may be provided with explicit new frequency commands from the base station's radio resource management entity which identifies the hopping sequence and current hop frequency. Alternatively, the ACPCE may determine the hopping sequence executed by the base station. This is readily achieved by the ACPCE because each unique hopping frequency will be identified by a specific loop gain and phase shift. Furthermore, each carrier frequency of operation will be identified by a unique distortion signature which the ACPCE may compute and use to identify a particular carrier frequency. [0409] 5.10. Dual FIR Filter Wideband Predistorter Construction [0410] The nonlinearity characteristic exhibited by an amplifier becomes increasingly complex as the power handling capability of the amplifier increases. In very high power applications, the DCSP architecture outlined above may become prohibitively large, and as such, prevent effective implementation. Specifically, if the dimensionality of the FIR filter's data structure [0411] FIG. 48 illustrates an alternate design in which the FIR filter coefficients stored in the data structure locations are separated into a bulk “quasi” static FIR filter [0412] The two filters [0413] A further enhancement is to permit the quasi static filter [0414] 5.11.
Functional Wideband Predistorter Construction Approach [0415] If the nonlinearity exhibited by the amplifier is exceptionally severe, then the size and complexity of the DCSP's data structure [0416]FIG. 50 illustrates an embodiment which uses this approach combined with the use of dynamic and quasi static FIR filters as described in the previous section. The data structure [0417] This technique represents a computationally more complex approach but yields a design that is potentially easier and smaller to implement in silicon. Typically a 12×12 multiplier in silicon requires 2000 transistor gates and each bit of memory storage requires a minimum of 4 gates. Thus, for a given level of complexity, it is relatively easy to determine whether the DCSP should be constructed from a functional tap computation approach or from a mass data structure approach (or a hybrid approach). Naturally, the approach outlined in this section increases the computational burden upon the ACPCE because of the surface functional fitting requirement. The approach is also attractive because the tap coefficients will not be subject to quantization noise due to the continuous function that is used to compute the tap values as a function of the variation in envelope properties. [0418] 5.12. Fast AGC Loop for Constant Operating Point [0419] It is not uncommon for very high power amplifiers (typically greater than 10 Watt peak power capability) to exhibit small variations or oscillations in bulk gain and phase response as a function of time. Typically, these oscillations have periods that span several seconds to several minutes. Ordinarily, such oscillations are eliminated by the adaptation process executed by the ACPCE. The ACPCE continually adjusts the DCSP's compensation parameters so that the loop gain, and hence amplifier's response, maintains a constant bulk phase and gain response. 
However, in scenarios where the amplifier may oscillate faster than the ACPCE can adapt, the inclusion of a fast automatic gain control (AGC) is highly desirable for maintaining performance. A typical scenario is the antenna array application (Section 5.2) where the ACPCE is responsible for supervising and ensuring that the DCSP's compensation parameters for multiple amplifier assemblies are current. [0420] FIG. 51 illustrates how a fast AGC component [0421] This approach is attractive because it permits the ACPCE to rapidly adjust a single AGC parameter and return to the detailed and extended computations used for the update of an alternate amplifier's DCSP's compensation parameters. That is, the ACPCE could rapidly adjust 8 independent amplifiers of an antenna array and then return to adapting the DCSP's coefficients of the first amplifier. Prior to computing the DCSP coefficients for the second amplifier, the ACPCE could rapidly readjust the AGC parameters of an entire set of antenna array amplifiers. The antenna design described in Section 5.2 can be used for this purpose. [0422] 5.13. Reduction in Data Structure Noise by Localized Dimension Updating [0423] The wideband noise floor exhibited by the wideband predistortion architecture is partially defined by the adaptation noise contained within the DCSP's data structure [0424] This contribution to the overall wideband noise characteristic of the design can be overcome by expanding the DCSP compensation coefficient update process. Ordinarily, the input signal trajectory and associated second order statistics cause a specific entry in the DCSP's data structure to be selected and updated. [0425] Ordinarily, this update is applied to the single data structure entry. However, since the amplifier is characterized by a smooth nonlinear function with continuous partial derivatives, an update vector applied at a particular point in the multi-variate space is also strongly applicable to points closely located.
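A Gaussian-weighted neighborhood update of this kind can be sketched as follows; the table layout, width parameter `sigma`, and cutoff are assumptions for illustration.

```python
import numpy as np

def localized_update(table, idx, update, sigma=1.0, cutoff=1e-3):
    """Apply one coefficient update at table index `idx`, and spread a
    distance-attenuated copy of it to neighbouring entries. The update gain
    is unity at `idx` and decays as a Gaussian with distance; entries whose
    gain falls below `cutoff` are skipped, so only the localized region
    surrounding the indexed point is actually touched."""
    for pos in np.ndindex(*table.shape):
        dist2 = sum((a - b) ** 2 for a, b in zip(pos, idx))
        gain = np.exp(-dist2 / (2.0 * sigma ** 2))  # Gaussian update gain
        if gain >= cutoff:
            table[pos] += gain * update
    return table
```

Correlating neighboring entries in this way is what smooths the table and lowers the adaptation-noise contribution to the wideband noise floor.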
Thus, the update equation defined by Equation 46 may be updated according to Equation 36:

X(x,y,z) ← X(x,y,z) + Δ(x−n, y−m, z−p)·U for all (x,y,z)    (Equation 36)

[0426] The equation defines that if the point (n,m,p) in the data structure space has been selected for updating, then all other points (x,y,z) within the data space are also updated. However, the update gain Δ is now a function of the distance between the initially indexed point (n,m,p) and the updated entry (x,y,z). Naturally, this function equals unity when the initially indexed point and the updated entry are identical. Otherwise, the function rapidly decays to zero, so that only the very localized data structure points surrounding the initially indexed point are updated.

[0427] This approach is attractive because the updates now become correlated and connected to the neighboring entries within the data structure.

[0428] The ACPCE preferably does not blindly execute all updates, but rather performs only those updates for which the update gain is non-zero. In practice, Gaussian functions have proven to be ideal for the update gain; naturally, another decaying function may also be used.

[0429] 5.14. Frequency Domain Smoothing

[0430] Management of overall system noise is an important consideration when dealing with digitally controlled amplifier designs. As discussed in Section 5.13, the wideband predistorter can introduce wideband noise due to independent adaptation errors associated with the individual sets of FIR filter compensation coefficients stored within the DCSP data structure. An alternative method of reducing these effects is to apply frequency domain smoothing to the DCSP coefficients. This function is preferably undertaken by the ACPCE, which symmetrically zero-pads the time domain FIR filter coefficients and converts each FIR filter's impulse response, h(t), to its frequency domain counterpart, H(w), via the FFT.
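The zero-pad, FFT, and neighbor-averaging procedure of Section 5.14 can be sketched as follows. The FFT length, the neighbor weights, the one-dimensional neighbor indexing, and the filter dimensions are not specified in the text and are chosen here purely for illustration.

```python
import numpy as np

def smooth_filters(filters, weights=(0.25, 0.5, 0.25), nfft=64):
    """Frequency domain smoothing of an array of FIR filters, one row per
    data structure entry (Section 5.14): zero-pad each impulse response
    h(t), take its FFT to get H(w), replace each H(w) with a weighted sum
    of itself and its neighboring filters' responses, then transform back
    and truncate to the original filter length."""
    H = np.fft.fft(filters, n=nfft, axis=1)       # zero-padded FFT of each h(t)
    smoothed = np.zeros_like(H)
    offsets = range(-(len(weights) // 2), len(weights) // 2 + 1)
    for i in range(H.shape[0]):
        acc = np.zeros(nfft, dtype=complex)
        for w, off in zip(weights, offsets):
            j = min(max(i + off, 0), H.shape[0] - 1)   # clamp at the edges
            acc += w * H[j]
        smoothed[i] = acc
    return np.fft.ifft(smoothed, axis=1)[:, :filters.shape[1]]

# Example: 16 data structure entries, each an 8-tap complex FIR filter.
filters = np.random.randn(16, 8) + 1j * np.random.randn(16, 8)
smoothed = smooth_filters(filters)
```

Because the combination is linear, smoothing in the frequency domain is equivalent to the same weighted averaging of the impulse responses; the frequency domain view makes explicit that each filter's response is pulled toward those of its neighbors, keeping the approximated nonlinear function smooth.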
In the frequency domain, the filter's response, H(w), is modified by computing a new H(w) that is derived from a weighted sum of the frequency domain responses of the neighboring filters. This process is carried out in an identical manner for each filter stored in the DCSP's data structure.

[0431] The purpose of filtering in the frequency domain is to ensure that, upon the re-computation of each filter's updated time domain impulse response, h(t), the partial derivatives of the nonlinear function approximated by the DCSP's data structure remain smooth and continuous.

[0432] 6. Conclusion

[0433] The inventive predistortion architecture, methods and components set forth above are applicable generally to any amplifier for bandlimited wideband RF signals. The techniques can be used for multiple signals and for any modulation scheme or combination of modulations. Where multiple signals are amplified, the signals can each have any modulation type.

[0434] The bandwidth of operation preferably does not exceed one octave of carrier frequency because of the harmonics generated in the nonlinear amplifiers. This is a normal limitation on the use of any nonlinear amplifier. In most applications at high RF frequencies, the bandwidth will be limited by the maximum clocking frequency of the digital processing hardware.

[0435] The predistortion architecture provides an alternative to the existing techniques of Cartesian feedback, LINC and feedforward. Each technology has its advantages and disadvantages. A system which uses the predistortion techniques set forth herein is generally simpler to implement than the other linearized amplifier types. Furthermore, the approach provides linearization performance that surpasses previously known and documented predistortion linearized power amplifiers.

[0436] The power conversion efficiency is determined in large part by the type of signals to be amplified.
For amplifying a single channel of a QAM or PSK type signal, the wideband digital predistortion efficiency is better than that of other amplifier types. For high peak-to-average ratio signals, the efficiency is not significantly different from that of the other methods. The purity of the output signal is excellent, and is better than that of current feedforward products.

[0437] The digital control features of the invention could be implemented in a custom integrated circuit for application to a variety of amplifier combinations and for supporting various up and downconversion systems.

[0438] The predistortion architecture is commercially significant because, for example, wideband third generation cellular basestation designs for W-CDMA, IMT-2000 and UMTS-2000 require ultra linear, power efficient multicarrier amplification. Currently, this requirement is not fulfilled by commercially available amplifier designs. The preferred embodiments of the invention fulfil this commercial requirement. The design is also applicable to other commercial systems such as point-to-point, point-to-multipoint, wireless local loop, MMDS and LMDS wireless systems. The approach is also applicable to existing cellular systems, where it may be used to reduce design and subsequent manufacturing costs. The predistortion techniques will also find utility in the satellite, cable broadcast and terrestrial broadcast industries where linear amplification is required. The design is particularly suitable for applications where digital radio and television signals require amplification without incurring distortion. Other embodiments and applications for the inventions will be apparent to those skilled in the art.
[0439] Although the invention has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the present invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above, but should be determined only by a fair reading of the claims that follow.