
[0001]
This application claims priority from U.S. Provisional Application Ser. No. 60/374,886, filed Apr. 22, 2002, U.S. Provisional Application Ser. No. 60/374,935, filed Apr. 22, 2002, U.S. Provisional Application Ser. No. 60/374,934, filed Apr. 22, 2002, U.S. Provisional Application Ser. No. 60/374,981, filed Apr. 22, 2002, U.S. Provisional Application Ser. No. 60/374,933, filed Apr. 22, 2002, the entire contents of which are incorporated herein by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002]
This invention was made with Government support under Contract No. ECS9979443, awarded by the National Science Foundation, and Contract No. DAAG559810336 (University of Virginia Subcontract No. 525127) awarded by the U.S. Army. The Government may have certain rights in this invention.
TECHNICAL FIELD

[0003]
The invention relates to communication systems and, more particularly, to transmitters and receivers for use in wireless communication systems.
BACKGROUND

[0004]
In wireless mobile communications, a channel that couples a transmitter to a receiver is often time-varying due to relative transmitter-receiver motion and multipath propagation. Such a time-variation is commonly referred to as fading, and may severely impair system performance. When a data rate for the system is high in relation to channel bandwidth, multipath propagation may become frequency-selective and cause intersymbol interference (ISI). By implementing an Inverse Fast Fourier Transform (IFFT) at the transmitter and an FFT at the receiver, Orthogonal Frequency Division Multiplexing (OFDM) converts an ISI channel into a set of parallel ISI-free subchannels with gains equal to the channel's frequency response values on the FFT grid. Each subchannel can be easily equalized by a single-tap equalizer using scalar division.

[0005]
To avoid inter-block interference (IBI) between successive IFFT-processed blocks, a cyclic prefix (CP) of length greater than or equal to the channel order is inserted per block at the transmitter and discarded at the receiver. In addition to suppressing IBI, the CP also converts linear convolution into cyclic convolution and thus facilitates diagonalization of an associated channel matrix.
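The CP mechanism lends itself to a quick numerical check. The following sketch (a hedged illustration assuming NumPy; the block length N=8 and channel order L=3 are arbitrary choices, not taken from this description) verifies that, after CP insertion at the transmitter and CP removal at the receiver, linear convolution with the channel becomes cyclic convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 3                          # block length and channel order (illustrative)
h = rng.standard_normal(L + 1)       # FIR channel taps h(0), ..., h(L)
u = rng.standard_normal(N)           # one IFFT-processed block

tx = np.concatenate([u[-L:], u])     # insert cyclic prefix (copy of the tail)
rx = np.convolve(tx, h)              # linear convolution with the channel
y = rx[L:L + N]                      # discard the first L samples (CP removal)

# y equals the cyclic convolution of u with h, computed here via the FFT.
y_circ = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h, N)))
assert np.allclose(y, y_circ)
```

Because this cyclic-convolution identity holds, the channel matrix seen between the IFFT/FFT pair is circulant, which is what makes the single-tap equalization described above possible.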

[0006]
Instead of having multipath diversity in the form of (superimposed) delayed and scaled replicas of the transmitted symbols, as in the case of serial transmission, OFDM transfers the multipath diversity to the frequency domain in the form of a (usually correlated) fading frequency response. Each OFDM subchannel has its gain expressed as a linear combination of the dispersive channel taps. When the channel has nulls (deep fades) close to or on the FFT grid, reliable detection of the symbols carried by these faded subcarriers becomes difficult if not impossible.

[0007]
Error-control codes are usually invoked before the IFFT processing to deal with the frequency-selective fading. These include convolutional codes, Trellis Coded Modulation (TCM) or coset codes, Turbo codes, and block codes (e.g., Reed-Solomon or BCH). Such coded OFDM schemes often incur high complexity and/or large decoding delay. Some of these schemes also require Channel State Information (CSI) at the transmitter, which may be unrealistic or too costly to acquire in wireless applications where the channel changes rapidly. Another approach to guaranteeing symbol detectability over ISI channels is to modify the OFDM setup: instead of introducing the CP, each IFFT-processed block can be zero-padded (ZP) with at least as many zeros as the channel order.
SUMMARY

[0008]
In general, techniques are described for robustifying multicarrier wireless transmissions, e.g., OFDM, against random frequency-selective fading by introducing memory into the transmission with complex field (CF) encoding across the subcarriers. Specifically, instead of sending a different uncoded symbol per subcarrier, the techniques utilize different linear combinations of the information symbols on the subcarriers. These techniques generalize signal space diversity concepts to allow for redundant encoding. The CF block code described herein can also be viewed as a form of real-number or analog codes.

[0009]
The encoder described herein is referred to as a &ldquo;Linear Encoder (LE),&rdquo; and the corresponding encoding process is called &ldquo;linear encoding,&rdquo; also abbreviated as LE when no confusion arises. The resulting CF-coded OFDM will be called LE-OFDM. In one embodiment, the linear encoder is designed so that maximum diversity order can be guaranteed without an essential decrease in transmission rate.

[0010]
By performing pairwise error probability analysis, we upper bound the diversity order of OFDM transmissions over random frequency-selective fading channels. The diversity order is directly related to a Hamming distance between the coded symbols. Moreover, the described LE can be designed to guarantee maximum diversity order, irrespective of the information symbol constellation, with minimum redundancy. In addition, the described LE codes are maximum distance separable (MDS) in the real or complex field, which generalizes the well-known MDS concept for Galois field (GF) codes. Two classes of LE codes are described that are MDS and guarantee maximum diversity order: the Vandermonde class, which generalizes Reed-Solomon codes to the real/complex field, and the Cosine class, which does not have a GF counterpart.

[0011]
Several possible decoding options are described, including ML, ZF, MMSE, DFE, and iterative detectors. Decision-directed detectors may be used to strike a tradeoff between complexity and performance.

[0012]
In one embodiment, a wireless communication device comprises an encoder that linearly encodes a data stream to produce an encoded data stream, and a modulator to produce an output waveform in accordance with the encoded data stream for transmission through a wireless channel.

[0013]
In another embodiment, a wireless communication device comprises a demodulator that receives a waveform carrying a linearly encoded transmission and produces a demodulated data stream, and a decoder that decodes the demodulated data stream and produces estimated data.

[0014]
In another embodiment, a method comprises linearly encoding a data stream to produce an encoded data stream, and outputting a waveform in accordance with the encoded data stream for transmission through a wireless channel.

[0015]
In another embodiment, a computer-readable medium comprises instructions to cause a programmable processor to linearly encode a data stream to produce an encoded data stream, and output a waveform in accordance with the encoded data stream for transmission through a wireless channel.

[0016]
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS

[0017]
FIG. 1 is a block diagram illustrating an exemplary wireless communication system in which a transmitter and receiver implement linear precoding techniques.

[0018]
FIGS. 2A and 2B illustrate uncoded and GF-coded BPSK signals.

[0019]
FIG. 3 illustrates an example format of a transmission block for CP-only transmissions by the transmitter of FIG. 1.

[0020]
FIG. 4 illustrates an example format of a transmission block for ZP-only transmissions by the transmitter of FIG. 1.

[0021]
FIG. 5 illustrates sphere decoding applied in one embodiment of the receiver of FIG. 1.

[0022]
FIG. 6 illustrates an example portion of the receiver of FIG. 1.

[0023]
FIG. 7 is a factor graph representing an example linear encoding process.

[0024]
FIGS. 8-10 are graphs that illustrate exemplary results of simulations of the described techniques.
DETAILED DESCRIPTION

[0025]
FIG. 1 is a block diagram illustrating a telecommunication system 2 in which transmitter 4 communicates data to receiver 6 through wireless channel 8. Transmitter 4 transmits data to receiver 6 using one of a number of conventional multicarrier transmission formats including Orthogonal Frequency Division Multiplexing (OFDM). OFDM has been adopted by many standards, including digital audio and video broadcasting (DAB, DVB) in Europe and high-speed digital subscriber lines (DSL) in the United States. OFDM has also been proposed for local area mobile wireless broadband standards including IEEE 802.11a, MMAC and HIPERLAN/2. In one embodiment, system 2 represents an LE-OFDM system having N subchannels.

[0026]
In general, the techniques described herein robustify multicarrier wireless transmissions, e.g., OFDM, against random frequency-selective fading by introducing memory into the transmission with complex field (CF) encoding across the subcarriers. In particular, transmitter 4 utilizes different linear combinations of the information symbols on the subcarriers. The techniques described herein may be applied to uplink and/or downlink transmissions, i.e., transmissions from a base station to a mobile device and vice versa. Consequently, transmitter 4 and receiver 6 may be any devices configured to communicate using a multiuser wireless transmission, including a cellular distribution station, a hub for a wireless local area network, a cellular phone, a laptop or handheld computing device, a personal digital assistant (PDA), and the like.

[0027]
In the illustrated embodiment, transmitter 4 includes linear encoder 10 and OFDM modulator 12. Receiver 6 includes OFDM demodulator 14 and equalizer 16. Due to CP insertion at transmitter 4 and CP removal at receiver 6, the dispersive channel 8 is represented as an N×N circulant matrix {tilde over (H)}, with [{tilde over (H)}]_{i,j}=h((i−j) mod N), where h(•) denotes the impulse response of channel 8:
$\tilde{H}=\begin{bmatrix} h(0) & 0 & \cdots & 0 & h(L) & \cdots & h(1)\\ \vdots & h(0) & 0 & \cdots & \ddots & \ddots & \vdots\\ h(L) & \vdots & \ddots & \ddots & & \ddots & h(L)\\ 0 & h(L) & & \ddots & 0 & & 0\\ \vdots & 0 & \ddots & & h(0) & \ddots & \vdots\\ & \vdots & \ddots & \ddots & \vdots & \ddots & 0\\ 0 & \cdots & & 0 & h(L) & \cdots & h(0) \end{bmatrix}\qquad(1)$
We assume the channel to be a random FIR channel consisting of no more than L+1 taps. The blocks within the dotted box of FIG. 1 represent a conventional uncoded OFDM system.

[0028]
Let F denote the N×N FFT matrix with entries [F]_{n,k}=(1/√N)exp(−j2πnk/N). Performing the IFFT (post-multiplication with the matrix F^{H}) at the transmitter and the FFT (pre-multiplication with the matrix F) at the receiver diagonalizes the circulant matrix {tilde over (H)}. So, we obtain the parallel ISI-free model for the ith OFDM symbol as (see FIG. 1): x_{i}=D_{H}u_{i}+η_{i}, where
$D_H:=\mathrm{diag}\left[H(j0),\,H\!\left(j2\pi\tfrac{1}{N}\right),\,\ldots,\,H\!\left(j2\pi\tfrac{N-1}{N}\right)\right]=F\tilde{H}F^{H},$
with H(jω) denoting the channel frequency response at ω, and η_{i}=Fñ_{i} standing for the FFT-processed additive white Gaussian noise (AWGN).
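As a numerical sketch of this diagonalization (assuming NumPy; the values N=8, L=3 and the random taps are illustrative), one can build the circulant matrix of equation (1) and the unitary FFT matrix F, and confirm that F{tilde over (H)}F^{H} is diagonal with the channel's frequency response on the FFT grid along its diagonal:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 8, 3
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# Circulant channel matrix: [H]_{i,j} = h((i - j) mod N), zero beyond tap L.
H_circ = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if (i - j) % N <= L:
            H_circ[i, j] = h[(i - j) % N]

# Unitary FFT matrix: [F]_{n,k} = exp(-j 2 pi n k / N) / sqrt(N).
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# IFFT at the transmitter, FFT at the receiver: D_H = F H F^H is diagonal.
D_H = F @ H_circ @ F.conj().T

# Its diagonal is the channel frequency response on the FFT grid.
assert np.allclose(D_H, np.diag(np.fft.fft(h, N)))
```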

[0029]
In order to exploit the frequency-domain diversity in OFDM, our LE-OFDM design first linearly encodes (i.e., maps) the K≦N symbols of the ith block, s_{i}∈S, where S is the set of all possible vectors that s_{i} may belong to (e.g., the BPSK set {±1}^{K×1}), by an N×K matrix Θ∈C^{N×K}, and then multiplexes the coded symbols u_{i}=Θs_{i}∈C^{N×1} using conventional OFDM. In practice, the set S is always finite, but we allow it to be infinite in our performance analysis. The encoder Θ considered here does not depend on the OFDM symbol index i. Time-varying encoders may be useful for certain purposes (e.g., power loading), but they will not be pursued here. Hence, from now on, we will drop the OFDM symbol index i for brevity.

[0030]
Notice that the matrix-vector multiplication used in defining u=Θs takes place in the complex field, rather than a Galois field. The matrix Θ can be naturally viewed as the generating matrix of a complex field block code. The codebook is defined as U:={Θs : s∈S}. By encoding a length-K vector to a length-N vector, some redundancy is introduced, which we quantify by the rate of the code, defined to be r=K/N, reminiscent of the GF block code rate definition. The set U is a subset of the C^{N×1} vector space. More specifically, U is a subset of the K-dimensional subspace spanned by the columns of Θ. When S=Z^{K×1}, the set U forms a lattice.

[0031]
Combining the encoder with the diagonalized channel model, the received block after CP removal and FFT processing can be written as:
x=F{tilde over (x)}=F({tilde over (H)}F^{H}Θs+{tilde over (η)})=D_{H}Θs+η. (2)
We want to design Θ so that a large diversity order can be guaranteed, irrespective of the constellation that the entries of s are drawn from, with a small amount of introduced redundancy.

[0032]
We can conceptually view Θ together with the OFDM modulation F^{H} as a combined N×K encoder {tilde over (Θ)}:=F^{H}Θ, which in a sense blends the single-carrier and multicarrier notions. Indeed, by selecting Θ, and hence {tilde over (Θ)}, the system in FIG. 1 can describe various single- and multicarrier systems, some of which are provided shortly as special cases of our LE-OFDM. The received vector {tilde over (x)} is related to the information symbol vector s through the matrix product {tilde over (H)}{tilde over (Θ)}.

[0033]
We define the Hamming distance δ(u,u′) between two vectors u and u′ as the number of nonzero entries in the vector u_{e}=u−u′, and the minimum Hamming distance of the set U as δ_{min}(U):=min{δ(u,u′) : u,u′∈U, u≠u′}. When there is no confusion, we will simply use δ_{min} for brevity. The minimum Euclidean distance between vectors in U is denoted as d_{min}(U), or simply d_{min}.

[0034]
Because such encoding operates in the complex field, it does not increase the dimensionality of the signal space. This is to be contrasted with GF encoding: the codeword set of a GF (n,k) code, when viewed as a real/complex vector set, in general has a higher dimensionality (n) than does the original uncoded block of symbols (k). Exceptions include the repetition code, for which the codeword set has the same dimensionality as that of the input.
EXAMPLE 1
Consider the binary (3,2) block code generated by the matrix

[0035]
$\begin{bmatrix}1 & 0 & 1\\ 0 & 1 & 1\end{bmatrix}^{T}\qquad(3)$
followed by BPSK constellation mapping (e.g., 0→−1 and 1→1). The codebook consists of 4 codewords
[−1 −1 −1]^{T}, [1 −1 1]^{T}, [−1 1 1]^{T}, [1 1 −1]^{T}. (4)
These codewords span the R^{3×1} (or C^{3×1}) space, and therefore the codebook has dimension 3 in the real or complex field, as illustrated in FIG. 2.
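A minimal numerical check of this example (assuming NumPy; the generator matrix is the one in (3)) enumerates the four BPSK-mapped codewords and confirms that every pair differs in exactly two of the three positions:

```python
import numpy as np
from itertools import product

# Generator matrix of the binary (3,2) block code from (3).
G = np.array([[1, 0, 1],
              [0, 1, 1]])

# Encode all four information pairs and apply the BPSK map 0 -> -1, 1 -> +1.
codewords = [2 * (np.array(s) @ G % 2) - 1 for s in product([0, 1], repeat=2)]

# Pairwise Hamming distances between distinct codewords.
hamming = [int(np.sum(a != b)) for i, a in enumerate(codewords)
           for b in codewords[i + 1:]]
assert min(hamming) == 2 and max(hamming) == 2
```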

[0036]
In general, an (n,k) binary GF block code is capable of generating 2^{k} codewords in an n-dimensional space R^{n×1} or C^{n×1}. If we view the transmit signal design problem as packing spheres in the signal space (Shannon's point of view), an (n,k) GF block code followed by constellation mapping packs spheres in an n-dimensional space and thus has the potential to be better (larger sphere radius) than a k-dimensional packing. In our example above, if we normalize the codewords by a factor √(2/3) so that the energy per bit E_{b} is one, the 4 codewords have mutual Euclidean distance √(8/3), larger than the minimum distance √2 of the uncoded BPSK signal set (±1, ±1). This increase in minimum Euclidean distance leads to improved system performance in AWGN channels, at least at high signal-to-noise ratio (SNR). For fading channels, the minimum Hamming distance of the codebook dominates high-SNR performance in the form of diversity gain (as will become clear later). The diversity gain achieved by the (3,2) block code in the example is the minimum Hamming distance 2.

[0037]
CF linear encoding, on the other hand, does not increase signal dimension; i.e., we always have dim(U)≦dim(S). When Θ has full column rank K, dim(U)=dim(S), in which case the codewords span a K-dimensional subspace of the N-dimensional vector space C^{N×1}. In terms of sphere packing, CF linear encoding does not yield a packing of dimension higher than K.

[0038]
We have the following assertion about the minimum Euclidean distance.

[0000]
Proposition 1: Suppose tr(ΘΘ^{H})=K. If the entries of s∈S are drawn independently from a constellation A of minimum Euclidean distance d_{min}(A), then the codewords in U:={Θs : s∈S} have minimum Euclidean distance no more than d_{min}(A).

[0039]
Proof: Under the power constraint tr(ΘΘ^{H})=K, at least one column of Θ will have norm no more than 1. Without loss of generality, suppose the first column has norm no more than 1. Consider s_{α}=(α,0, . . . ,0)^{T} and s_{β}=(β,0, . . . ,0)^{T}, where α and β are two symbols from the constellation that are separated by d_{min}. The coded vectors u_{α}=Θs_{α} and u_{β}=Θs_{β} are then separated by a distance no more than d_{min}.

[0040]
Due to Proposition 1, CF linear codes are not effective for improving performance for AWGN channels. But for fading channels, they may have an advantage over GF codes, because they are capable of producing codewords that have large Hamming distance.
EXAMPLE 2
The encoder

[0041]
$\Theta=\sqrt{\tfrac{4}{15}}\begin{bmatrix}1 & 1 & 1\\ 0.5 & 0.5 & 0.5\end{bmatrix}^{T},\qquad(5)$
operating on the BPSK signal set S={±1}^{2}, produces 4 codewords of minimum Euclidean distance √(4/5) and minimum Hamming distance 3. Compared with the GF code in Example 1, this real code has smaller Euclidean distance but larger Hamming distance. In addition, the CF coding scheme described herein differs from GF block coding in that the entries of the LE output vector u usually belong to a larger, although still finite, alphabet set than do the entries of the input vector s.
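The distances claimed in this example can be verified directly (a sketch assuming NumPy; Θ is the encoder of (5)):

```python
import numpy as np
from itertools import product

# The CF encoder of (5): N = 3 coded symbols, K = 2 information symbols.
Theta = np.sqrt(4 / 15) * np.array([[1.0, 0.5],
                                    [1.0, 0.5],
                                    [1.0, 0.5]])

# Encode every BPSK vector s in {-1, +1}^2.
codewords = [Theta @ np.array(s, dtype=float)
             for s in product([-1, 1], repeat=2)]

ham, euc = [], []
for i, a in enumerate(codewords):
    for b in codewords[i + 1:]:
        ham.append(int(np.sum(~np.isclose(a, b))))   # entries that differ
        euc.append(float(np.linalg.norm(a - b)))

assert min(ham) == 3                          # minimum Hamming distance 3
assert np.isclose(min(euc), np.sqrt(4 / 5))   # minimum Euclidean distance
```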

[0042]
Before exploring optimal design of Θ, let us first look at some special cases of the LE-OFDM system.

[0043]
By setting K=N and Θ=I_{N}, we obtain the conventional uncoded OFDM model. In such a case, the one-tap linear equalizer matrix Γ=D_{H}^{−1} yields ŝ=Γx=s+D_{H}^{−1}η, where the inverse exists when the channel has no nulls on the FFT grid. Under the assumption that ñ (hence η) is AWGN, such an equalizer followed by a minimum distance quantizer is optimum in the maximum-likelihood (ML) sense for a given channel when CSI has been acquired at the receiver. But when the channel has nulls on (or close to) the FFT grid ω=2πn/N, n=0, . . . , N−1, the matrix D_{H} will be ill-conditioned and serious noise amplification will emerge if we try to invert D_{H} (the noise variance can become unbounded). Although events of channel nulls being close to the FFT grid have relatively low probability, their occurrence is known to have a dominant impact on the average system performance, especially at high SNR. Improving the performance of an uncoded transmission thus relies on robustifying the system against the occurrence of such low-probability but catastrophic events. If CSI is available at the transmitter, power and bit loading can be used and channel nulls can be avoided, such as in discrete multitone (DMT) systems. If we choose K=N and Θ=F, then since F^{H}F=I_{N}, the IFFT F^{H} reverses the encoding and the resulting system is a single-carrier block transmission with CP insertion (cf. FIG. 3): {tilde over (x)}={tilde over (H)}s+{tilde over (η)}. The FFT at the receiver is no longer necessary.

[0044]
Let K=N−L. We choose Θ to be an N×K truncated FFT matrix (the first K columns of F); i.e., [Θ]_{n,k}=(1/√N)exp(−j2πnk/N). It can be easily verified that F^{H}Θ=[I_{K}, 0_{K×L}]^{T}:=T_{zp}, where 0_{K×L} denotes a K×L all-zero matrix, and the subscript &ldquo;zp&rdquo; stands for zero-padding (ZP). The matrix T_{zp} simply pads zeros at the tail of s, and the zero-padded block ũ=T_{zp}s is transmitted. Notice that H:={tilde over (H)}F^{H}Θ={tilde over (H)}T_{zp} is an N×K Toeplitz convolution matrix (the first K columns of {tilde over (H)}), which is always full rank. The symbols s can thus always be recovered from the received signal {tilde over (x)}=Hs+{tilde over (η)} (perfectly in the absence of noise), and no catastrophic channels exist in this case. The cyclic prefix in this case consists of L zeros, which, together with the L zeros from the encoding process, result in 2L consecutive zeros between two consecutive uncoded information blocks of length K. But only L zeros are needed in order to separate the information blocks. The CP is therefore not necessary, because the L zeros created by Θ already separate successive blocks.
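The ZP-only special case can be sketched numerically (assuming NumPy; N=8 and L=3 are illustrative). The tall Toeplitz matrix H built from the channel taps models the zero-padded transmission, and the information block is recovered exactly in the absence of noise:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 8, 3
K = N - L                                # information block length
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# N x K Toeplitz convolution matrix: the first K columns of the circulant channel matrix.
H = np.zeros((N, K), dtype=complex)
for k in range(K):
    H[k:k + L + 1, k] = h

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# ZP-only transmission: append L zeros, then convolve with the channel.
x = np.convolve(np.concatenate([s, np.zeros(L)]), h)[:N]
assert np.allclose(x, H @ s)             # x = H s in the noiseless model

# H always has full column rank, so s is recoverable for any nonzero channel.
s_hat = np.linalg.lstsq(H, x, rcond=None)[0]
assert np.allclose(s_hat, s)
```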

[0045]
ZP-only transmission is essentially a simple single-carrier block scheme. However, viewing it as a special case of the LE-OFDM design will allow us to apply the results about LE-OFDM and gain insights into its performance. It turns out that this special case is indeed very special: it achieves the best high-SNR performance among the LE-OFDM class.

[0046]
To design linear encoder 10 with the goal of improving performance over uncoded OFDM, we utilize a pairwise error probability (PEP) analysis technique. For simplicity, we will first assume that: (as1) The channel h:=[h(0), h(1), . . . , h(L)]^{T} has independent and identically distributed (i.i.d.) zero-mean complex Gaussian taps (Rayleigh fading). The corresponding correlation matrix of h is R_{h}:=E[hh^{H}]=α_{L}I_{L+1}, where the constant α_{L}:=1/(L+1).

[0047]
Later on, we will relax this assumption to allow for correlated fading with a possibly rank-deficient autocorrelation matrix R_{h}.

[0048]
We suppose ML detection with perfect CSI at the receiver and consider the probability P(s→s′|h), s,s′∈S, that a vector s is transmitted but is erroneously decoded as s′≠s. We define the set of all possible error vectors as S_{e}:={e:=s−s′ : s,s′∈S, s≠s′}.

[0049]
The PEP can be approximated using the Chernoff bound as:
P(s→s′|h)≦exp(−d^{2}(y,y′)/(4N_{0})), (6)
where N_{0}/2 is the noise variance per dimension, y:=D_{H}Θs, y′:=D_{H}Θs′, and d(y,y′)=∥y−y′∥ is the Euclidean distance between y and y′.

[0050]
Let us consider the N×(L+1) matrix V with entries [V]_{n,l}=exp(−j2πnl/N), and use it to perform the N-point discrete Fourier transform Vh of h. Note that D_{H}=diag(Vh); i.e., the diagonal entries of D_{H} are those in the vector Vh. Using the definitions e:=s−s′∈S_{e}, u_{e}:=Θe, and D_{e}:=diag(u_{e}), we can write y−y′=D_{H}u_{e}=diag(Vh)u_{e}. Furthermore, we can express the squared Euclidean distance d^{2}(y,y′)=∥D_{H}u_{e}∥^{2}=∥D_{e}Vh∥^{2} as
d^{2}(y,y′)=h^{H}V^{H}D_{e}^{H}D_{e}Vh:=h^{H}A_{e}h. (7)
An upper bound on the average PEP can be obtained by averaging (6) with respect to the random channel h to obtain:
$P(s\to s')\le\prod_{l=0}^{L}\frac{1}{1+\alpha_L\lambda_{e,l}/(4N_0)},\qquad(8)$
where λ_{e,0}, λ_{e,1}, . . . , λ_{e,L} are the non-increasing eigenvalues of the matrix A_{e}=V^{H}D_{e}^{H}D_{e}V.

[0051]
If r_{e} is the rank of A_{e}, then λ_{e,l}≠0 if and only if l∈[0, r_{e}−1]. Since 1+α_{L}λ_{e,l}/(4N_{0})>α_{L}λ_{e,l}/(4N_{0}), it follows from (8) that
$P(s\to s')\le\left(\frac{1}{4N_0}\right)^{-r_e}\left(\prod_{l=0}^{r_e-1}\alpha_L\lambda_{e,l}\right)^{-1}.\qquad(9)$
We call r_{e} the diversity order, denoted G_{d,e}, and (Π_{l=0}^{r_e−1}α_{L}λ_{e,l})^{1/r_e} the coding advantage, denoted G_{c,e}, for the symbol error vector e. The diversity order G_{d,e} determines the slope of the average (w.r.t. the random channel) PEP (between s and s′) as a function of the SNR at high SNR (N_{0}→0). Correspondingly, G_{c,e} determines the shift of this PEP curve in SNR relative to a benchmark error rate curve of (1/(4N_{0}))^{−r_e}. When r_{e}=L+1, A_{e} is full rank, the product of eigenvalues becomes the determinant of A_{e}, and therefore the coding advantage is given by α_{L}[det(A_{e})]^{1/(L+1)}.

[0052]
Since both G_{d,e} and G_{c,e} depend on the choice of e, we define the diversity order and coding advantage for our LE-OFDM system, respectively, as:
$G_d:=\min_{e\in S_e}G_{d,e}=\min_{e\in S_e}\mathrm{rank}(A_e),\quad\text{and}\quad G_c:=\min_{e\in S_e}G_{c,e}.\qquad(10)$

[0053]
We use the term diversity order herein to mean the asymptotic slope of the error probability versus SNR curve on a log-log scale. Often, &ldquo;diversity&rdquo; refers to &ldquo;channel diversity,&rdquo; i.e., roughly the degrees of freedom of a given channel. To attain a certain diversity order (slope) of the error probability versus SNR curve, three conditions should be satisfied: i) Transmitter 4 is well designed, so that the information symbols are encoded with sufficient redundancy (enough diversification); ii) Channel 8 is capable of providing enough degrees of freedom; iii) Receiver 6 is well designed, so as to sufficiently exploit the redundancy introduced at the transmitter.

[0054]
Since the diversity order G_{d }determines how fast the symbol error probability drops as SNR increases, G_{d }is to be optimized first.

[0055]
We have the following theorem.

[0056]
Theorem 1 (Maximum Achievable Diversity Order): For a transmitted codeword set U with minimum Hamming distance δ_{min}, over i.i.d. FIR Rayleigh fading channels of order L, the diversity order is min(δ_{min}, L+1). Thus, the Maximum Achievable Diversity Order (MADO) of LE-OFDM transmissions is L+1, and in order to achieve MADO, we need δ_{min}≧L+1.

[0057]
Proof: Since the matrix A_{e}=V^{H}D_{e}^{H}D_{e}V in (7) is the Gram matrix of D_{e}V, the rank r_{e} of A_{e} is the same as the rank of D_{e}V, which is min(δ(u,u′), L+1)≦L+1. Therefore, the diversity order of the system is
$G_d=\min_{e\in S_e}\mathrm{rank}(A_e)=\min_{e\in S_e}\min[\delta(u,u'),\,L+1]=\min(\delta_{\min},\,L+1)\le L+1,$
and the equality is achieved when δ_{min}≧L+1.

[0058]
Theorem 1 is intuitively reasonable because the FIR Rayleigh fading channel offers L+1 independent fading taps, which is the maximum possible number of independent replicas of the transmitted signal in the serial transmission mode. In order to achieve the MADO, any two codewords in U must differ in no fewer than L+1 entries.
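The rank argument behind Theorem 1 can be checked numerically. The sketch below (assuming NumPy; N=8 and L=3 are illustrative) draws error vectors u_{e} of each Hamming weight δ and confirms that rank(A_{e})=min(δ, L+1):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 8, 3

# N x (L+1) matrix V with [V]_{n,l} = exp(-j 2 pi n l / N).
V = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L + 1)) / N)

for delta in range(1, N + 1):
    # Random error vector u_e with exactly delta nonzero entries.
    u_e = np.zeros(N, dtype=complex)
    support = rng.choice(N, size=delta, replace=False)
    u_e[support] = rng.standard_normal(delta) + 1j * rng.standard_normal(delta)

    D_e = np.diag(u_e)
    A_e = V.conj().T @ D_e.conj().T @ D_e @ V
    assert np.linalg.matrix_rank(A_e) == min(delta, L + 1)
```

The rank saturates at L+1, the MADO, once the Hamming weight of the error vector reaches the number of channel taps.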

[0059]
The results of Theorem 1 also apply to GF-coded/interleaved OFDM systems in which coding and interleaving are performed within, and not across, successive OFDM symbols. The diversity order is again the minimum of the minimum Hamming distance of the code and L+1. To see this, it suffices to view U as the codeword set of the GF-coded blocks.

[0060]
To achieve MADO, we need A_{e} to be full rank, and thus positive definite, for any e∈S_{e}. This is true if and only if h^{H}A_{e}h>0 for any h≠0∈C^{(L+1)×1}. Equation (7) shows that this is equivalent to d^{2}(y,y′)=∥D_{H}Θe∥^{2}≠0, ∀e∈S_{e}, and ∀h≠0. The latter means that any two different transmitted vectors should result in different received vectors in the absence of noise, irrespective of the channel; in such cases, we call the symbols detectable or recoverable. The conditions for achieving MADO and channel-irrespective symbol detectability are summarized in the following theorem:

[0000]
Theorem 2 (Symbol Detectability⇔MADO): Under the channel conditions of Theorem 1, the maximum diversity order is achieved if and only if symbol detectability is achieved, i.e., ∥D_{H}Θe∥^{2}≠0, ∀e∈S_{e} and ∀h≠0.

[0061]
The result in Theorem 2 is somewhat surprising: it asserts the equivalence of a deterministic property of the code, namely symbol detectability in the absence of noise, with a statistical property, the diversity order. It can be explained, though, by realizing that in random channels, the performance is mostly affected by the worst channels, despite their small realization probability. By guaranteeing detectability for any, and therefore the worst, channels, we are essentially improving the ensemble performance.

[0062]
The symbol detectability condition in Theorem 2 should be checked against all pairs s and s′, which is usually not an easy task, especially when the underlying constellations are large and/or when the size K of s is large. But it is possible to identify sufficient conditions on Θ that guarantee symbol detectability and that are relatively easy to check. One such condition is provided by the following theorem.

[0063]
Theorem 3 (Sufficient Condition for MADO): For i.i.d. FIR Rayleigh fading channels of order L, MADO is achieved when rank(D_{H}Θ)=K, ∀h≠0, which is equivalent to the following condition: any N−L rows of Θ span the C^{1×K} space. The latter in turn implies that N−L≧K.

[0064]
Proof: First of all, since Θ is of size N×K, it cannot have rank greater than K. If MADO is not achieved, there exists at least one channel h and one e∈S_{e} such that D_{H}Θe=0 by Theorem 2, which means that rank(D_{H}Θ)<K. So, MADO is achieved when rank(D_{H}Θ)=K, ∀h≠0. Secondly, since the diagonal entries of D_{H} represent the frequency response of the channel h evaluated at the FFT frequencies, there can be at most L zeros on the diagonal of D_{H}. In order that rank(D_{H}Θ)=K, ∀h≠0, it suffices to have any N−L rows of Θ span the C^{1×K} space. On the other hand, when there is a set of N−L rows of Θ that are linearly dependent, we can find a channel that has zeros at the frequencies corresponding to the remaining L rows. Such a channel will make rank(D_{H}Θ)<K. This completes the proof.

[0065]
The natural question that arises at this point is whether there exist LE matrices Θ that satisfy the conditions of Theorem 3. The following theorem constructively shows two classes of encoders that satisfy Theorem 3 and thus achieve MADO.

[0000]
Theorem 4 (MADOachieving encoders):

[0000]
i) Vandermonde Encoders: Choose N points ρ_{n}∈C, n=0, 1, . . . , N−1, such that ρ_{m}≠ρ_{n}, ∀m≠n. Let ρ:=[ρ_{0}, ρ_{1}, . . . , ρ_{N−1}]^{T}. Then the Vandermonde encoder Θ(ρ)∈C^{N×K} defined by [Θ(ρ)]_{n,k}=ρ_{n}^{k} satisfies Theorem 3 and thus achieves MADO.
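Part i) can be sanity-checked numerically (assuming NumPy; here the ρ_{n} are chosen as N distinct roots of unity, one valid choice among many, with illustrative N=6 and L=2):

```python
import numpy as np
from itertools import combinations

N, L = 6, 2
K = N - L

# N distinct points on the unit circle (any distinct rho_n would do).
rho = np.exp(-2j * np.pi * np.arange(N) / N)

# Vandermonde encoder: [Theta]_{n,k} = rho_n^k, size N x K.
Theta = np.vander(rho, K, increasing=True)

# Condition of Theorem 3: every set of N - L rows spans C^{1 x K}.
for rows in combinations(range(N), N - L):
    assert np.linalg.matrix_rank(Theta[list(rows), :]) == K
```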

[0066]
ii) Cosine Encoders: Choose N points ø_{0}, ø_{1}, . . . , ø_{N−1}∈R, such that ø_{m}≠(2k+1)π and ø_{m}±ø_{n}≠2kπ, ∀m≠n, ∀k∈Z. Let ø:=[ø_{0}, ø_{1}, . . . , ø_{N−1}]^{T}. Then the real cosine encoder Θ(ø)∈R^{N×K} defined by
$[\Theta(\phi)]_{n,k}=\cos\left(\left(k+\tfrac{1}{2}\right)\phi_n\right)$
satisfies Theorem 3 and thus achieves MADO.
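A corresponding check for part ii) (assuming NumPy; the angles ø_{n}=(2n+1)π/(2N) are one choice satisfying the stated conditions, with illustrative N=6 and L=2):

```python
import numpy as np
from itertools import combinations

N, L = 6, 2
K = N - L

# Angles phi_n = (2n+1) pi / (2N): none is an odd multiple of pi, and
# phi_m +/- phi_n is never a nonzero multiple of 2 pi for m != n.
phi = (2 * np.arange(N) + 1) * np.pi / (2 * N)

# Real cosine encoder: [Theta]_{n,k} = cos((k + 1/2) phi_n), size N x K.
Theta = np.cos(np.outer(phi, np.arange(K) + 0.5))

# Condition of Theorem 3: every set of N - L rows spans R^{1 x K}.
for rows in combinations(range(N), N - L):
    assert np.linalg.matrix_rank(Theta[list(rows), :]) == K
```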

[0067]
Proof: We first prove that the Vandermonde encoders in i) satisfy the conditions of Theorem 3. Any K rows of the matrix Θ(ρ) form a square Vandermonde matrix with distinct rows. Such a Vandermonde matrix is known to have a nonzero determinant. Therefore, any K rows of Θ(ρ) are linearly independent, which satisfies the conditions in Theorem 3.

[0068]
To prove Part ii) of the theorem, we show that any K rows of the encoding matrix form a nonsingular square matrix. Without loss of generality, we consider the matrix formed by the first K rows:
$\Theta_1:=\begin{bmatrix} \cos\left(\frac{1}{2}\phi_0\right) & \cos\left(\frac{3}{2}\phi_0\right) & \cdots & \cos\left(\frac{2K-1}{2}\phi_0\right)\\ \cos\left(\frac{1}{2}\phi_1\right) & \cos\left(\frac{3}{2}\phi_1\right) & \cdots & \cos\left(\frac{2K-1}{2}\phi_1\right)\\ \vdots & \vdots & & \vdots\\ \cos\left(\frac{1}{2}\phi_{K-1}\right) & \cos\left(\frac{3}{2}\phi_{K-1}\right) & \cdots & \cos\left(\frac{2K-1}{2}\phi_{K-1}\right) \end{bmatrix}\qquad(11)$

[0069]
Let us evaluate the determinant det(Θ_{1}). Define
${z}_{n}:=\mathrm{cos}\left(\frac{1}{2}{\varphi}_{n}\right).$
Using Chebyshev polynomials of the first kind T_{l}(x):=cos(l cos^{−1 }x)=Σ_{i=0} ^{⌊l/2⌋}(_{2i} ^{l})x^{l−2i}(x^{2}−1)^{i}, each entry
$\mathrm{cos}\left(\frac{2m+1}{2}{\varphi}_{n}\right)$
of Θ_{1 }is a polynomial T_{2m+1}(z_{n}) of order 2m+1 in some
${z}_{n}=\mathrm{cos}\left(\frac{1}{2}{\varphi}_{n}\right).$
The determinant det(Θ_{1}) is therefore a polynomial in z_{0}, . . . , z_{K−1 }of order Σ_{n=1} ^{K}(2n−1)=K^{2}. It is easy to see that when z_{n}=0, or when z_{m}=±z_{n}, m≠n, Θ_{1 }has an all-zero row, or two rows that are either the same or the negative of each other. Therefore, z_{n}, z_{m}−z_{n}, and z_{m}+z_{n }are all factors of det(Θ_{1}). So, g(z_{0}, z_{1}, . . . , z_{K−1}):=Π_{n}z_{n}Π_{m>n}(z_{m} ^{2}−z_{n} ^{2}) is also a factor of det(Θ_{1}). But g(z_{0}, z_{1}, . . . , z_{K−1}) is of order K+K(K−1)=K^{2}, which means that it differs from det(Θ_{1}) by at most a constant. Using the leading coefficient 2^{l−1 }of T_{l}(x), we obtain the constant as Π_{n=1} ^{K}2^{(2n−1)−1}=2^{K(K−1)}; that is, det(Θ_{1})=2^{K(K−1)}g(z_{0}, z_{1}, . . . , z_{K−1}).

[0070]
Since ø_{m}≠(2k+1)π and ø_{m}±ø_{n}≠2kπ, ∀m≠n, ∀kεZ, none of z_{n}, z_{m}−z_{n}, and z_{m}+z_{n }can be zero. Therefore, det(Θ_{1})≠0 and Θ_{1 }is nonsingular. A similar argument applies to any K rows of the matrix, and the proof is complete.
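The determinant identity det(Θ_1)=2^{K(K−1)}g(z_0, . . . , z_{K−1}) derived above can be checked numerically. The sketch below is our own illustration (numpy assumed); it picks random angles that avoid the excluded sets and compares both sides:

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
phi = rng.uniform(0.1, 3.0, K)   # angles avoiding phi = (2k+1)*pi and phi_m +/- phi_n = 2k*pi

z = np.cos(phi / 2)

# Theta_1 as in (11): entry (n, m) is cos((2m+1)/2 * phi_n) = T_{2m+1}(z_n)
Theta1 = np.cos((np.arange(K)[None, :] + 0.5) * phi[:, None])

# g(z) = prod_n z_n * prod_{m>n} (z_m^2 - z_n^2)
g = np.prod(z) * np.prod([z[m]**2 - z[n]**2
                          for m in range(K) for n in range(m)])
assert np.isclose(np.linalg.det(Theta1), 2**(K * (K - 1)) * g)
```

For K=2 the identity is easy to verify by hand: cos(3θ)=4cos³θ−3cosθ gives det(Θ_1)=4z_0z_1(z_1²−z_0²), matching 2^{2·1}g.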

[0071]
Notice that up to now we have been assuming that the channel consists of i.i.d. zeromean complex Gaussian taps. Such a model is well suited for studying average system performance in wireless fading channels, but is rather restrictive since the taps may be correlated. For correlated channels, we have the following result.
Theorem 5 (MADO of Correlated Rayleigh Channels): Let the channel h be zero-mean complex Gaussian with correlation matrix R_{h}. The maximum achievable diversity order equals the rank of R_{h}, which is achieved by any encoder that achieves MADO with i.i.d. Rayleigh channels. If R_{h }is full rank and MADO is achieved, then the coding advantage differs from the coding advantage in the i.i.d. case only by a constant
${\mathrm{det}}^{\frac{1}{L+1}}\left({R}_{h}\right)/{\alpha}_{L}.$

[0072]
Proof: Let r_{h}:=rank(R_{h}) and the eigenvalue decomposition of R_{h }be
$\begin{array}{cc}{R}_{h}=\left[{U}_{1}{U}_{2}\right]\left[\begin{array}{cc}{\Lambda}_{1}& 0\\ 0& {\Lambda}_{2}\end{array}\right]\left[\begin{array}{c}{U}_{1}^{H}\\ {U}_{2}^{H}\end{array}\right].& \left(12\right)\end{array}$
where U_{1 }is (L+1)×r_{h}, U_{2 }is (L+1)×(L+1−r_{h}), Λ_{1 }is an r_{h}×r_{h }full-rank diagonal matrix, and Λ_{2 }is an (L+1−r_{h})×(L+1−r_{h}) all-zero matrix. Define
${\stackrel{\sim}{h}}_{1}:={\Lambda}_{1}^{-\frac{1}{2}}{U}_{1}^{H}h,$
{tilde over (h)}_{2}:=U_{2} ^{H}h, and {tilde over (h)}:=[{tilde over (h)}_{1} ^{T }{tilde over (h)}_{2} ^{T}]^{T}, where
${\Lambda}_{1}^{-\frac{1}{2}}$
is defined by
${\Lambda}_{1}^{-\frac{1}{2}}{\Lambda}_{1}^{-\frac{1}{2}}={\Lambda}_{1}^{-1}.$
Since {tilde over (h)}_{2 }has autocorrelation matrix R_{{tilde over (h)}} _{ 2 }=U_{2} ^{H}R_{h}U_{2}=Λ_{2}=0, all the entries of {tilde over (h)}_{2 }are zero almost surely. We can therefore
write
$\begin{array}{cc}h=\left[{U}_{1}{\Lambda}_{1}^{\frac{1}{2}}{U}_{2}\right]\stackrel{\sim}{h}={U}_{1}{\Lambda}_{1}^{\frac{1}{2}}{\stackrel{\sim}{h}}_{1}.& \left(13\right)\end{array}$
Since
${R}_{{\stackrel{\sim}{h}}_{1}}={\Lambda}_{1}^{-\frac{1}{2}}{U}_{1}^{H}{R}_{h}{U}_{1}{\Lambda}_{1}^{-\frac{1}{2}}={I}_{{r}_{h}},$
the entries of {tilde over (h)}_{1}, which are jointly Gaussian, are i.i.d.
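The whitening step in this proof can be sketched numerically. In the snippet below (our illustration; numpy assumed), R_h is taken full rank for simplicity, so h̃_2 is empty and h̃_1 carries the whole channel:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 3
A = rng.standard_normal((L + 1, L + 1))
Rh = A @ A.T                                  # a full-rank correlation matrix
lam, U = np.linalg.eigh(Rh)                   # R_h = U diag(lam) U^T, all lam > 0 here
h = np.linalg.cholesky(Rh) @ rng.standard_normal(L + 1)   # correlated channel draw

h1 = np.diag(lam ** -0.5) @ U.T @ h           # whitened taps: Lambda^{-1/2} U^H h

# (13): h is recovered as U Lambda^{1/2} h1, and h1 has identity correlation
assert np.allclose(h, U @ np.diag(lam ** 0.5) @ h1)
R_h1 = np.diag(lam ** -0.5) @ U.T @ Rh @ U @ np.diag(lam ** -0.5)
assert np.allclose(R_h1, np.eye(L + 1))
```

When R_h is rank deficient, the same computation is restricted to the r_h eigenvectors with nonzero eigenvalues, as in (12).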

[0073]
Substituting (13) in (7), we obtain
$\begin{array}{cc}{d}^{2}\left(y,{y}^{\prime}\right)={h}^{H}{A}_{e}h={\stackrel{\sim}{h}}_{1}^{H}{\Lambda}_{1}^{\frac{1}{2}}{U}_{1}^{H}{A}_{e}{U}_{1}{\Lambda}_{1}^{\frac{1}{2}}{\stackrel{\sim}{h}}_{1}:={\stackrel{\sim}{h}}_{1}^{H}{\stackrel{\sim}{A}}_{e}{\stackrel{\sim}{h}}_{1},& \left(14\right)\end{array}$
where
${\stackrel{\sim}{A}}_{e}={\Lambda}_{1}^{\frac{1}{2}}{U}_{1}^{H}{A}_{e}{U}_{1}{\Lambda}_{1}^{\frac{1}{2}}$
is an r_{h}×r_{h }matrix.

[0074]
Following the same derivation as in (7)(10), with A_{e }replaced by Ã_{e }and h replaced by {tilde over (h)}_{1}; we can obtain the diversity order and coding advantage for error event e as
$\begin{array}{cc}{G}_{d,e}=\mathrm{rank}\left({\stackrel{\sim}{A}}_{e}\right):={\stackrel{\sim}{r}}_{e}\le {r}_{h}\text{\hspace{1em}}\mathrm{and}\text{\hspace{1em}}{G}_{c,e}={\left(\prod _{l=0}^{{\stackrel{\sim}{r}}_{e}-1}\text{\hspace{1em}}{\stackrel{\sim}{\lambda}}_{e,l}\right)}^{1/{\stackrel{\sim}{r}}_{e}},& \left(15\right)\end{array}$
where {tilde over (λ)}_{e,l}, l=1, . . . , r_{h}, are the eigenvalues of Ã_{e}.

[0075]
When Θ is designed such that MADO is achieved with i.i.d. channels, A_{e }is full rank for any eεS_{e}. Then A_{e }is a positive definite Hermitian matrix, which means that there exists an (L+1)×(L+1) matrix B_{e }such that A_{e}=B_{e} ^{H}B_{e}. It follows that
${\stackrel{\sim}{A}}_{e}={\Lambda}_{1}^{\frac{1}{2}}{U}_{1}^{H}{B}_{e}^{H}{B}_{e}{U}_{1}{\Lambda}_{1}^{\frac{1}{2}}$
is the Gram matrix of
${B}_{e}{U}_{1}{\Lambda}_{1}^{\frac{1}{2}},$
and thus Ã_{e }has rank equal to rank
$\left({B}_{e}{U}_{1}{\Lambda}_{1}^{\frac{1}{2}}\right)=\mathrm{rank}\left({U}_{1}{\Lambda}_{1}^{\frac{1}{2}}\right)={r}_{h},$
the MADO for this correlated channel.

[0076]
When the MADO r_{h }is achieved, the coding advantage in (15) for e becomes G_{c,e}=det(Ã_{e})^{1/r} ^{ h }. If in addition R_{h }has full rank r_{h}=L+1, then det(Ã_{e})^{1/r} ^{ h }=det(A_{e})^{1/(L+1)}det(R_{h})^{1/(L+1)}, which means that in the full-rank correlated channel case, the full-diversity coding advantage differs from the coding advantage in the i.i.d. case only by a constant det(R_{h})^{1/(L+1)}/α_{L}.

[0077]
Theorem 5 asserts that rank(R_{h}) is the MADO for LE-OFDM systems as well as for coded OFDM systems that do not code or interleave across OFDM symbols. Also, a MADO-achieving transmission designed for i.i.d. channels achieves the MADO for correlated channels as well.

[0078]
Coding advantage G_{e }is another parameter that needs to be optimized among the MADOachieving encoders. Since for MADOachieving encoders, coding advantage is given by G_{e}=min_{e≠0}G_{e,e}=α_{L}min_{e≠0}det(A_{e}), we need to maximise the minimum determinant of A_{e }over all possible error sequences e, among the MADOachieving encoders.

[0079]
The following theorem asserts that ZPonly transmission is one of the coding advantage maximizers.

[0080]
Theorem 6 (ZP-only: maximum coding advantage): Suppose the entries of s(i) are drawn independently from a finite constellation A with minimum distance d_{min}(A). Then the maximum coding advantage of an LE-OFDM system for i.i.d. Rayleigh fading channels under As1) is G_{c,max}=α_{L}d_{min} ^{2}(A). The maximum coding advantage is achieved by ZP-only transmissions with any K.

[0081]
In order to achieve high rate, we have adopted K=N−L and found, in Theorem 4, two special classes of encoders that can achieve MADO. The Vandermonde encoders are reminiscent of the parity-check matrices of BCH codes, Reed-Solomon (RS) codes, and Goppa codes. It turns out that the MADO-achieving encoders and these codes are closely related.

[0082]
Let us now take S=C^{K×1}. We call the codeword set U generated by Θ of size N×K Maximum Distance Separable (MDS) if δ_{min}(U)=N−K+1. The fact that N−K+1 is the maximum possible minimum Hamming distance of U is due to the Singleton bound. Although the Singleton bound was originally proposed and is mostly known for Galois-field codes, its proof generalizes easily to the real/complex field as well. In our case, it asserts that δ_{min}≦N−K+1 when S=C^{K×1}.

[0083]
Notice that the assumption S=C^{K×1 }is usually not true in practice, because the entries of s are usually chosen from a finite-alphabet set, e.g., QPSK or QAM. But such an assumption greatly simplifies the system design task: once we can guarantee δ_{min}=N−K+1 for S=C^{K×1}, we can choose any constellation based on other considerations without worrying about the diversity performance. However, for a finite constellation, i.e., when S has finite cardinality, the result on δ_{min }can be improved. In fact, it can be shown that even with a square and unitary K×K matrix Θ, it is possible to have δ_{min}=K.

[0084]
To satisfy the condition in Theorem 2 with the highest rate for a given N, we need K=N−L and δ_{min}=L+1=N−K+1. In other words, to achieve constellation-irrespective full diversity at the highest rate, we need the code to be MDS. According to our Theorem 4, such MDS encoders always exist for any N and K<N.

[0085]
In the GF, MDS codes also exist. Examples of GF MDS codes include single-parity-check codes, repetition codes, generalized RS codes, extended RS codes, doubly extended RS codes, and algebraic-geometry codes constructed using an elliptic curve.

[0086]
When a GF MDS code exists, we may use it to replace our CF linear code and achieve the same (maximum) diversity order at the same rate. But such GF codes do not always exist for a given field and given N, K. For F_{2}, only trivial MDS codes exist. This means that it is impossible to construct, for example, binary (and thus simply decodable) MDS codes that have δ_{min}≧2, except for the repetition code. Another restriction of GF MDS codes concerns the input and output alphabets. Although Reed-Solomon codes are the least restrictive among them in terms of the number of elements in the field, they are constrained in code length and alphabet size. Our linear encoders Θ, on the other hand, operate over the complex field with no restriction on the input symbol alphabet or the coded symbol alphabet.

[0000]
For our complex-field MDS codes that achieve MADO, we obtain results analogous to known results for GF MDS codes.

[0000]
Theorem 7 (Dual MDS codes): For an MDS code generated by ΘεC^{N×K}, the code generated by the matrix Θ_{⊥} is also MDS, where Θ_{⊥ }is an N×(N−K) matrix such that Θ_{⊥} ^{T}Θ=0.

[0087]
A generator Θ for an MDS code is called systematic if it is in the form [I_{K}, P]^{T }where P is a K×(N−K) matrix.

[0000]
Theorem 8 (Systematic MDS code): A code generated by [I, P]^{T }is MDS if and only if every square submatrix of P is nonsingular.

[0088]
To construct systematic MDS codes using Theorem 8, the following two results can be useful:

 i) Every square submatrix of a Vandermonde matrix with real, positive entries is nonsingular.
 ii) A K×(N−K) matrix P is called a Cauchy matrix if its (i, j)th element is [P]_{i,j}=1/(x_{i}+y_{j}) for some elements x_{1}, x_{2}, . . . , x_{K}, y_{1}, y_{2}, . . . , y_{N−K}, such that the x_{i}'s are distinct, the y_{j}'s are distinct, and x_{i}+y_{j}≠0 for all i, j. Any square submatrix of a Cauchy matrix is nonsingular.
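For small sizes, the Cauchy construction can be checked by brute force. The sketch below (our illustration; numpy assumed) builds a Cauchy matrix P and verifies that every square submatrix is nonsingular, which is exactly what Theorem 8 requires of a systematic MDS generator:

```python
import numpy as np
from itertools import combinations

def cauchy(x, y):
    """K x (N-K) Cauchy matrix with [P]_{i,j} = 1 / (x_i + y_j)."""
    return 1.0 / (np.asarray(x)[:, None] + np.asarray(y)[None, :])

def all_square_submatrices_nonsingular(P):
    """Brute-force check of the Theorem 8 condition."""
    K, M = P.shape
    return all(not np.isclose(np.linalg.det(P[np.ix_(rows, cols)]), 0.0)
               for r in range(1, min(K, M) + 1)
               for rows in combinations(range(K), r)
               for cols in combinations(range(M), r))

# distinct x_i, distinct y_j, and x_i + y_j != 0 for all i, j (illustrative values)
P = cauchy([1.0, 2.0, 3.0], [0.5, 1.5])
assert all_square_submatrices_nonsingular(P)
```

By Theorem 8, [I, P]^{T} with this P then generates a systematic MDS code.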

[0091]
Next, we discuss decoding options for our CF code. For this purpose, we restrict our attention to the case where S is a finite set, e.g., a finite constellation carved from a (possibly scaled and shifted) Z^{K}. This includes BPSK, QPSK, and QAM as special cases. Since the task of the receiver involves both channel equalization and decoding of the CF linear code, we will consider the combined task jointly and will use the words decoding, detection, and equalization interchangeably.

[0000]
Maximum Likelihood Detection

[0092]
To achieve MADO, LE-OFDM requires ML decoding. For the input-output relationship in (2) and under the AWGN assumption, the minimum-distance detection rule is ML and can be formulated as follows:
$\begin{array}{cc}\hat{s}=\underset{s\in \mathcal{S}}{\mathrm{arg}\text{\hspace{1em}}\mathrm{min}}\uf605x{D}_{H}\mathrm{\Theta s}\uf606.& \left(16\right)\end{array}$

[0093]
ML decoding of LE transmissions belongs to a general class of lattice decoding problems, as the matrix product D_{H}Θ in (2) gives rise to a discrete subgroup (lattice) of C^{N }under vector addition. In its most general form, finding the optimum estimate in (16) requires searching over all vectors in S. For large block sizes and/or large constellations, it is practically impossible to perform an exhaustive search, since the complexity depends exponentially on the number of symbols in the block.

[0094]
A relatively less complex ML search is possible with the sphere decoding (SD) algorithm (cf. FIG. 5), which only searches coded vectors that lie within a sphere centered at the received symbol x (cf. (2)). Denote the QR decomposition of D_{H}Θ as D_{H}Θ=QR,
where Q has size N×K and satisfies Q^{H}Q=I_{K×K}, and R is an upper triangular K×K matrix. The problem in (16) then converts to the following equivalent problem
$\begin{array}{cc}\hat{s}=\underset{s\in \mathcal{S}}{\mathrm{arg}\text{\hspace{1em}}\mathrm{min}}\uf605{Q}^{H}x\mathrm{Rs}\uf606,& \left(17\right)\end{array}$
SD starts its search by looking only at vectors s such that
∥Q ^{H} x−Rs∥<C, (18)
where C is the search radius, a decoding parameter. Since R is upper triangular, in order to satisfy the inequality in (18), the last entry of s must satisfy |[Q^{H}x]_{K}−[R]_{K,K}[s]_{K}|<C, which reduces the search space if C is small. For each possible value of the last entry, possible candidates for the last-but-one entry are found, and one candidate is taken. The process continues until a vector s_{0 }is found that satisfies (18). Then the search radius C is set equal to ∥Q^{H}x−Rs_{0}∥ and a new search round is started. If no other vector is found inside this radius, then s_{0 }is the ML solution. Otherwise, if some s_{1 }is found inside the sphere, the search radius is again reduced to ∥Q^{H}x−Rs_{1}∥, and so on. If no s_{0 }is ever found inside the initial sphere of radius C, then C is too small. In this case, either a decoding failure is declared or C is increased.
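The radius-shrinking idea can be illustrated with a toy decoder. For clarity, the sketch below enumerates the whole candidate set rather than pruning the search tree entry by entry, so it shows only the QR step and the radius update, not the complexity savings of a real sphere decoder. The function name and test values are our own, and numpy is assumed.

```python
import numpy as np
from itertools import product

def toy_sphere_decode(x, A, alphabet, C):
    """Radius-shrinking search for min ||Q^H x - R s|| over s in alphabet^K.
    Enumerates all candidates for clarity; a real sphere decoder uses the
    triangular structure of R to prune candidates entry by entry."""
    Q, R = np.linalg.qr(A)                   # A = D_H Theta = Q R
    y = Q.conj().T @ x
    best, radius = None, C
    for s in product(alphabet, repeat=R.shape[1]):
        s = np.array(s)
        d = np.linalg.norm(y - R @ s)
        if d < radius:                       # found a point inside the sphere:
            best, radius = s, d              # shrink the radius and continue
    return best

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))              # effective channel D_H Theta, N=4, K=3
s_true = np.array([1.0, -1.0, 1.0])          # BPSK block
x = A @ s_true                               # noiseless received block for the sketch
assert np.array_equal(toy_sphere_decode(x, A, (-1.0, 1.0), C=10.0), s_true)
```

In the noiseless case the true block attains distance zero, so it is returned whenever the initial radius C is large enough to contain it.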

[0095]
The complexity of the SD is polynomial in K, which is better than exponential but still too high for practical purposes. Indeed, it is not suitable for codes of block size greater than, say, 16. When the block size is small, the sphere decoder can be considered as an option to achieve ML performance at manageable complexity.

[0096]
In the special case of ZPonly transmissions, the received vector is given by {tilde over (x)}=Hs+{tilde over (η)}. Thanks to the zeropadding, the full convolution of the transmitted block s with the FIR channel is preserved and the channel is represented as the banded Toeplitz matrix H. In such a case, Viterbi decoding can be used at a complexity of O(Q^{L}) per symbol, where Q is the constellation size of the symbols in s.

[0000]
Low-Complexity Linear Detection

[0097]
Zeroforcing (ZF) and MMSE detectors (equalizers) offer lowcomplexity alternatives. The ZF and MMSE equalizers based on the inputoutput relationship (2) can be written as:
G ^{zf}=(D _{H}Θ)^{† }and G ^{mmse} =R _{s}Θ^{H} D _{H} ^{H}(σ_{η} ^{2} I _{N} +D _{H} ΘR _{s}Θ^{H} D _{H} ^{H})^{−1},
respectively, where (•)^{† }denotes pseudo-inverse, σ_{η} ^{2 }is the variance of the entries of the noise η, and R_{s }is the autocorrelation matrix of s. Given the ZF and MMSE equalizers, each requires O(N×K) operations per K symbols, i.e., only O(N) operations per symbol. Obtaining the ZF or MMSE equalizer involves inversion of an N×N matrix, which has complexity O(N^{3}). However, the equalizers only need to be recomputed when the channel changes.
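A direct transcription of these two equalizers (real-valued arithmetic, white unit-power symbols, and illustrative dimensions assumed; numpy is used):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, sigma2 = 8, 6, 0.1
A = rng.standard_normal((N, K))              # effective channel D_H Theta (real here)
Rs = np.eye(K)                               # white, unit-power symbols assumed

G_zf = np.linalg.pinv(A)                     # G_zf = (D_H Theta)^dagger
G_mmse = Rs @ A.T @ np.linalg.inv(sigma2 * np.eye(N) + A @ Rs @ A.T)

s = rng.choice([-1.0, 1.0], size=K)
assert np.allclose(G_zf @ (A @ s), s)        # ZF inverts the channel exactly (no noise)
```

For complex channels, the transposes above become conjugate transposes, matching the Hermitian transposes in the equalizer expressions.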
Decision-Directed Detection

[0098]
The ML detection schemes in general have high complexity, while the linear detectors may have decreased performance. The class of decisiondirected detectors lies between these categories, both in terms of complexity and in terms of performance.

[0099]
Decision-directed detectors capitalize on the finite-alphabet property that is almost always available in practice. In the equalization scenario, they are more commonly known as Decision Feedback Equalizers (DFE). In a single-user block formulation, the DFE has the structure shown in FIG. 6, where the feedforward filter is represented as a matrix W and the feedback filter is represented as B. Since we can only feed back decisions in a causal fashion, B is usually chosen to be a strictly upper or lower triangular matrix with zero diagonal entries. Although the feedback loop is represented as a matrix, the operations happen in a serial fashion: the estimated symbols are fed back serially as their decisions are formed one by one. The matrices W and B can be designed according to ZF or MMSE criteria. When B is chosen to be triangular and the MSE of the block estimate before the decision device is minimized, the feedforward and feedback filtering matrices can be found from the following equations:
R _{s} ^{−1}+Θ^{H} D _{H} ^{H} R _{η} ^{−1} D _{H} Θ=U ^{H} ΛU, (19)
W=UR _{s}Θ^{H} D _{H} ^{H}(R _{η} +D _{H} ΘR _{s}Θ^{H} D _{H} ^{H})^{−1} , B=U−I, (20)
where the R's denote autocorrelation matrices, (19) is obtained using a Cholesky decomposition, and U is an upper triangular matrix with unit diagonal entries. Since the feedforward and feedback filtering entail only matrix-vector multiplications, the complexity of such decision-directed schemes is comparable to that of linear detectors. Because decision-directed schemes capitalize on the finite-alphabet property of the information symbols, their performance is usually (much) better than that of linear detectors.
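The Cholesky-based construction of W and B in (19)-(20) can be sketched as follows. This is our own illustration assuming white noise and white symbols; the factorization M=U^{H}ΛU is obtained from a standard Cholesky factor by scaling out its diagonal.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, sigma2 = 8, 6, 0.05
A = rng.standard_normal((N, K))              # effective channel D_H Theta
Rs, Reta = np.eye(K), sigma2 * np.eye(N)     # symbol and noise autocorrelations

# (19): R_s^{-1} + Theta^H D_H^H R_eta^{-1} D_H Theta = U^H Lam U
M = np.linalg.inv(Rs) + A.T @ np.linalg.inv(Reta) @ A
C = np.linalg.cholesky(M).T                  # M = C^T C with C upper triangular
d = np.diag(C)
U = C / d[:, None]                           # unit-diagonal upper triangular factor
Lam = np.diag(d ** 2)

# (20): feedforward W and strictly triangular feedback B
W = U @ Rs @ A.T @ np.linalg.inv(Reta + A @ Rs @ A.T)
B = U - np.eye(K)

assert np.allclose(U.T @ Lam @ U, M)         # the factorization in (19) holds
assert np.allclose(np.diag(B), 0)            # zero diagonal, so feedback is causal
```

Because B is strictly triangular, each symbol decision only feeds back decisions already made, matching the serial operation described above.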

[0100]
As an example, we list in the following table the approximate number of flops needed for different decoding schemes when K=14, L=2, N=16, and BPSK modulation is deployed; i.e., S={±1}^{K}.
 TABLE 1

 Decoding Scheme        Order of flops/symbol

 Exhaustive ML          >2^{K }= 2^{14 }= 16,384
 Sphere Decoding        ≈800 (empirical)
 ZF/MMSE                ≈N = 16
 Decision-Directed      ≈N = 16
 Viterbi for ZP-only    2^{L }= 2^{2 }= 4
Iterative Detectors

[0101]
Other possible decoding methods include iterative detectors, such as successive interference cancellation with iterative least squares (SIC-ILS), and multistage cancellations. These methods are similar to the illustrated DFE in that the interference from symbols already decided in a block is canceled before a decision on the current symbol is made. In SIC-ILS, least squares is used as the optimization criterion, and at each step or iteration the cost function (least squares) will decrease or remain the same. In multistage cancellation, the MMSE criterion is often used, such that matched filtering is optimum after the interference is removed (supposing that the noise is white). The difference between a multistage cancellation scheme and the block DFE is that the DFE symbol decisions are made serially, and for each undecided symbol only interference from symbols that have already been decided is cancelled; in multistage cancellation, all symbols are decided simultaneously and then their mutual interference is removed in a parallel fashion.

[0102]
As illustrated in FIG. 7, another embodiment may utilize for LE-OFDM equalization an iterative "sum-product" decoding algorithm, which is also used in Turbo decoding. In particular, the coded system is represented using a factor graph, which describes the interdependence of the encoder input, the encoder output, and the noise-corrupted coded symbols.

[0103]
As a simple example, suppose the encoder takes a block of 3 symbols s:=[s_{0}, s_{1}, s_{2}]^{T }as input and linearly encodes them by a 4×3 matrix Θ to produce the coded symbols u:=[u_{0}, u_{1}, u_{2}, u_{3}]^{T}. After passing through the channel (OFDM modulation/demodulation), we obtain the channel output x_{i}=H(e^{j2πi/4})u_{i}, i=0, 1, 2, 3. The factor graph for such a coded system is shown in FIG. 7, where the LE is represented by linear constraints between the LE input symbols s and the LE output symbols u.

[0000]
Parallel Encoding for Low Complexity Decoding

[0104]
When the number of carriers N is very large (e.g., 1,024), it is desirable to keep the decoding complexity manageable. To achieve this, we can split the encoder into several smaller encoders. Specifically, we can choose Θ=PΘ′, where P is a permutation matrix that interleaves the subcarriers, and Θ′ is a block-diagonal matrix: Θ′=diag(Θ_{0}, Θ_{1}, . . . , Θ_{M−1}). This is essentially a form of coding for interleaved OFDM, except that the coding is done in the complex domain here. The matrices Θ_{m}, m=0, . . . , M−1, are of smaller size than Θ, and all of them can even be chosen to be identical. With such a designed Θ, decoding s from the noisy D_{H}Θs is equivalent to decoding M coded subvectors of smaller sizes, and therefore the overall decoding complexity can be reduced considerably. Such a decomposition is particularly important when a high-complexity decoder such as the sphere decoder is to be deployed.
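A sketch of this parallel construction (illustrative sizes; numpy assumed): the encoder is Θ=PΘ′ with Θ′ block diagonal, and decoding splits into M small independent problems after de-interleaving.

```python
import numpy as np

rng = np.random.default_rng(5)
M_blocks, n, k = 4, 4, 3                     # M sub-encoders, each n x k (illustrative)
subs = [rng.standard_normal((n, k)) for _ in range(M_blocks)]

Theta_prime = np.zeros((M_blocks * n, M_blocks * k))
for m, T in enumerate(subs):                 # Theta' = diag(Theta_0, ..., Theta_{M-1})
    Theta_prime[m*n:(m+1)*n, m*k:(m+1)*k] = T
P = np.eye(M_blocks * n)[rng.permutation(M_blocks * n)]   # subcarrier interleaver
Theta = P @ Theta_prime

# Decoding the interleaved block splits into M small independent problems
s = rng.standard_normal(M_blocks * k)
u_deint = P.T @ (Theta @ s)                  # de-interleave the (noiseless) output
for m, T in enumerate(subs):
    s_m = np.linalg.lstsq(T, u_deint[m*n:(m+1)*n], rcond=None)[0]
    assert np.allclose(s_m, s[m*k:(m+1)*k])
```

Each sub-problem involves only an n×k system, so a high-complexity decoder such as the sphere decoder need only be run on blocks of size k rather than on the full block.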

[0105]
The price paid for low decoding complexity is a decrease in transmission rate. When such parallel encoding is used, we should make sure that each of the Θ_{m }matrices can guarantee full diversity, which requires each Θ_{m }to have L redundant rows. The overall Θ will then have ML redundant rows, which corresponds to an M-fold increase over the redundancy of a single full encoder of size N×K. If a fixed constellation is used for the entries of s, then square Θ_{m}'s can be used, which does not lead to a loss of efficiency.

[0106]
FIGS. 8-10 are graphs that illustrate exemplary results of simulations of the described techniques. In the illustrated results, we compare the proposed wireless communication techniques with existing coded OFDM systems that deploy existing GF block codes and convolutional codes. In all cases, the BPSK constellation is used, and in Test Cases 2 and 3, the binary encoded symbols are mapped to ±1's before OFDM modulation.

[0107]
Test case 1 (Decoding of LE-OFDM): We first test the performance of different decoding algorithms. The LE-OFDM system has parameters K=14, N=16, L=2. The channel is i.i.d. Rayleigh, and BERs for 200 random channel realizations according to As1) are averaged. FIG. 8 shows the performance of ZF, MMSE, DFE, and sphere decoding (ML) for LE-OFDM. We notice that at a BER of 10^{−4}, the DFE performs about 2 dB better than the MMSE detector, while at the same time it is less than 1 dB inferior to the sphere decoder, which virtually achieves the ML decoding performance. The complexity of ZF, MMSE, and DFE is about N=16 flops per symbol, which is much less than that of the sphere decoding algorithm, which empirically needs about 800 flops per symbol in this case.

[0108]
Test case 2 (Comparing LE-OFDM with BCH-coded OFDM): For demonstration and verification purposes, we first compare LE-OFDM with coded OFDM that relies on GF block coding. The channel is modeled as FIR with 5 i.i.d. Rayleigh distributed taps. In FIG. 9, we illustrate the Bit Error Rate (BER) performance of CF-coded OFDM with the Vandermonde code of Theorem 4, and that of binary BCH-coded OFDM. The system parameters are K=26, N=31. The generating polynomial of the BCH code is g(D)=1+D^{2}+D^{5}. Since we can view this BCH code as a rate-1 convolutional code with the same generator and with termination after 26 information symbols (i.e., the code ends at the all-zero state), we can use the Viterbi algorithm for soft-decision ML BCH decoding. For LE-OFDM, since the transmission is essentially a ZP-only single-carrier scheme, the Viterbi algorithm is also applicable for ML decoding.

[0109]
Since the binary (31, 26) BCH code has minimum Hamming distance 3, it possesses a diversity order of 3, which is only half of the maximum possible (L+1=6) that LE-OFDM achieves with the same spectral efficiency. This explains the difference in their performance. We can see that when the optimum ML decoder is adopted by both receivers, LE-OFDM outperforms coded OFDM with BCH coding considerably. The slopes of the corresponding BER curves also confirm our theoretical results.

[0110]
Test case 3 (Comparing LE-OFDM with convolutionally coded OFDM): In this test, we compare (see FIG. 10) our LE-OFDM system with the convolutionally coded OFDM (with a rate-½ code punctured to rate ¾, followed by interleaving) that is deployed by the HiperLAN/2 standard, over the channels used in Test Case 2. The rate-½ mother code has its generator in octal form as (133, 171), and there are 64 states in its trellis. Every third bit from the first branch and every second bit from the second branch of the mother code are punctured to obtain the rate-¾ code, which results in a code whose weight enumerating function is 8W^{5}+31W^{6}+160W^{7}+. . . . So the free distance is 5, which means that the achieved diversity is 5, less than the diversity order 6 achieved by LE-OFDM.

[0111]
The parameters are K=36, N=48. We use two parallel truncated DCT encoders; that is, Θ=I_{2×2}{circle around (×)}Θ_{0}, where {circle around (×)} denotes the Kronecker product and Θ_{0 }is a 24×18 encoder obtained by taking the first 18 columns of a 24×24 DCT matrix. With ML decoding, LE-OFDM performs about 2 dB better than convolutionally coded OFDM. From the ML performance curves in FIG. 10, LE-OFDM appears to achieve a larger coding advantage than the punctured convolutional code we used.

[0112]
Surprisingly, even with linear MMSE equalization, the performance of LE-OFDM is better than that of coded OFDM for SNR values less than 12 dB. The complexity of ML decoding for LE-OFDM is quite high, on the order of 1,000 flops per symbol. But the ZF and MMSE decoders have complexity comparable to or even lower than that of the Viterbi decoder for the convolutional code.

[0113]
The complexity of LEOFDM can be dramatically reduced using the parallel encoding method with square encoders. It is also possible to combine CF coding with conventional GF coding, in which case only small square encoders of size 2×2 or 4×4 are necessary to achieve near optimum performance.

[0114]
Various embodiments of the invention have been described. The described techniques can be embodied in a variety of receivers and transmitters including base stations, cell phones, laptop computers, handheld computing devices, personal digital assistants (PDAs), and the like. The devices may include a digital signal processor (DSP), field programmable gate array (FPGA), application specific integrated circuit (ASIC) or similar hardware, firmware and/or software for implementing the techniques. If implemented in software, a computer readable medium may store computer readable instructions, i.e., program code, that can be executed by a processor or DSP to carry out one or more of the techniques described above. For example, the computer readable medium may comprise random access memory (RAM), read-only memory (ROM), nonvolatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, or the like. The computer readable medium may comprise computer readable instructions that when executed in a wireless communication device, cause the wireless communication device to carry out one or more of the techniques described herein. These and other embodiments are within the scope of the following claims.