Publication number | USRE43231 E1 |
Publication type | Grant |
Application number | US 11/798,175 |
Publication date | Mar 6, 2012 |
Filing date | May 10, 2007 |
Priority date | Mar 27, 2000 |
Fee status | Paid |
Also published as | US6892343, US20020016945 |
Publication number | 11798175, 798175, US RE43231 E1, US RE43231E1, US-E1-RE43231, USRE43231 E1, USRE43231E1 |
Inventors | Khalid Sayood, Michael W. Hoffman, Billy D. Pettijohn |
Original Assignee | Board Of Regents Of The University Of Nebraska |
Export Citation | BiBTeX, EndNote, RefMan |
Patent Citations (28), Non-Patent Citations (24), Classifications (23), Legal Events (2) | |
External Links: USPTO, USPTO Assignment, Espacenet | |
This is a reissue application of U.S. Pat. No. 6,892,343 B2 issued May 10, 2005. U.S. Pat. No. 6,892,343 B2 issued from U.S. application Ser. No. 09/816,398 filed Mar. 24, 2001, which claims the benefit under 35 U.S.C. §119(e) of provisional application Ser. No. 60/192,215 filed on Mar. 27, 2000.
The present invention relates to source symbol encoding, decoding and error correction capability in the context of noisy channels in electronic communication systems. More particularly, the preferred present invention is a system and method for joint source-channel encoding and variable length symbol decoding with error correction, comprising arithmetic encoder means and combination sequential and arithmetic encoded-symbol decoder means.
BACKGROUND
With the increasing popularity of mobile communications there has come renewed interest in joint source-channel coding. The reason is that shared mobile communications channels are restrictive in terms of bandwidth and suffer from impairments such as fading and interference, making some form of error protection essential, particularly where variable length codes are used. Further, it is well known that standard approaches to error correction are expensive in terms of required bandwidth, hence there exists a need for systems and methodology which can provide efficient source and channel encoding and symbol decoding with error correction. Viable candidates include a joint source-channel encoding system and methodology which utilizes characteristics of a source, or source encoder, to provide error protection.
As background, it is noted that one of the earliest works that examined the effect of errors on variable length codes was that of Maxted and Robinson in an article titled “Error Recovery for Variable Length Codes”, IEEE Trans. on Information Theory, IT-31, p. 794-801, (November 1985). Corrections and additions to said work were provided by Monaco and Lawlor in “Error Recovery for Variable Length Codes”, IEEE Trans. on Information Theory, IT-33, p. 454-456, (May 1987). Said work was later extended by Soualhi et al. in “Simplified Expression for the Expected Error Span Recovery for Variable Length Codes”, Intl. J. of Electronics, 75, p. 811-816, (November 1989); by Rahman et al. in “Effects of a Binary Symmetric Channel on the Synchronization Recovery of Variable Length Codes”, Computer J., 32, p. 246-251, (January 1989); by Takishima et al. in “Error States and Synchronization Recovery for Variable Length Codes”, IEEE Trans. on Communications, 42, p. 783-792; and by Swaszek et al. in “More on the Error Recovery for Variable Length Codes”, IEEE Trans. on Information Theory, IT-41, p. 2064-2071, (November 1995), all of which focused mainly on the resynchronization ability of Huffman Codes.
In terms of joint source channel coding where the source and source encoder characteristics are used to provide error protection, one of the earliest works which incorporated variable length codes was that of Sayood, Liu and Gibson in “Implementation Issues in MAP Joint Source/Channel Coding”, Proc. 22nd Annual Asilomar Conf. on Circuits, Systems, and Computers, p. 102-106, IEEE, (November 1988). Assuming a Markov model for the source encoder output, they used packetization to prevent error propagation and the residual redundancy at the source encoder output to provide error protection. This approach is used by Park and Miller, who have developed a bit constrained decoder specifically for use with variable length codes, (see “Decoding Entropy-Coded Symbols Over Noisy Channels by MAP Sequence Estimation for Asynchronous HMMs”, Proc. Conference on Information Sciences and Systems, IEEE, (March 1999)). Murad and Fuja, in “Robust Transmissions of Variable-Length Encoded Sources”, Proc. IEEE Wireless and Networking Conf. 1999, (September 1999); and Sayood, Otu and Demir in “Joint Source/Channel Coding for Variable Length Codes”, IEEE Transactions on Communications, 48:787-794, (May 2000), describe designs which make use of the redundancy at the source coder output for error correction.
The problem of low bandwidth hostile channels can also be addressed using error resilient source codes which incorporate the possibility of errors in the channel and provide mechanisms for error concealment. Work in the area includes that of Yang et al. as reported in “Robust Image Compression Based on Self-Synchronizing Huffman Code and Inter-Subband Dependency”, Proc. thirty-second Asilomar Conference on Signals, Systems and Computers, p. 986-972 (November 1997), who use the self-synchronizing property of suffix rich Huffman codes to limit error propagation, and correlation between subbands to provide error correction/concealment.
In addition, there exist a number of concatenated schemes in which the source and channel encoders are concatenated in the traditional manner with channel resources allocated between them based on the characteristics of the channel. If the channel is very noisy, more bits are allocated to the channel and fewer to source encoding, and the situation is reversed when the channel conditions are more favorable. Examples of this approach include the work of Regunathan et al. as presented in an article titled “Robust Image Compression for Time Varying Channels”, Proc. Thirty-first Asilomar Conf. on Signals, Systems and Computers, p. 968-972, (November 1997) and in an article titled “Progressive Image Coding for Noisy Channels”, by Sherwood et al., IEEE Signal Processing Lett., 4 p. 189-191, (July 1997).
Most of the schemes referenced above use Huffman coding or variants thereof as the variable length coding scheme; however, with the increasing popularity of arithmetic coding, there has developed interest in joint source channel coding schemes which use said arithmetic coding. One such approach is described in “Arithmetic Coding Algorithm with Embedded Channel Coding”, ElMasry, Electronics Lett., 33, p. 1687-1688, (September 1997); and another is described in “Integrating Error Detection into Arithmetic Coding”, Boyd et al., IEEE Transactions on Communications, 45(1), p. 1-3, (January 1997). The ElMasry approach involves generation of parity bits which are embedded into the arithmetic coding procedure for error correction. The Boyd approach showed that, by reserving probability space for a symbol which is not in the source alphabet, the arithmetic code can be used for detecting errors. Reserving probability space for a symbol that will never be generated means that less space remains for the source alphabet, and this translates into a higher coding rate. Said overhead, however, is small considering the error detection capability enabled, as described by Kozintsev et al. in “Image Transmission Using Arithmetic Coding Based on Continuous Error Detection”, Proc. of Data Compression Conf., p. 339-348, IEEE Computer Society Press, (1998), regarding two scenarios, (e.g., Automatic Repeat Request (ARQ) based communications, and serially concatenated coding schemes with an inner error correction code and an outer error detection code), which use the error detecting capability of an arithmetic code with an error detection space.
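The Boyd et al. idea of reserving probability space for a symbol the encoder never emits can be sketched as follows. This is an illustrative floating-point sketch, not the patented implementation; the alphabet, its probabilities, and the value of EPS are hypothetical, and a practical coder would use integer arithmetic to avoid precision loss on long messages.

```python
EPS = 0.2                                    # reserved error-detection space
ALPHABET = {'a': 0.5, 'b': 0.3, 'c': 0.2}    # hypothetical source probabilities

def _ranges():
    # Scale source probabilities into (1 - EPS); the top EPS of the unit
    # interval is reserved and never produced by a correct encoding.
    lo, out = 0.0, {}
    for sym, p in ALPHABET.items():
        out[sym] = (lo, lo + p * (1.0 - EPS))
        lo += p * (1.0 - EPS)
    return out                               # [lo, 1.0) is the detection space

def encode(msg):
    low, high = 0.0, 1.0
    rng = _ranges()
    for s in msg:
        a, b = rng[s]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2.0                # any value inside the final interval

def decode(value, n):
    rng = _ranges()
    out = []
    low, high = 0.0, 1.0
    for _ in range(n):
        v = (value - low) / (high - low)
        hit = None
        for s, (a, b) in rng.items():
            if a <= v < b:
                hit = s
                break
        if hit is None:                      # landed in reserved space: error
            return out, True
        a, b = rng[hit]
        low, high = low + (high - low) * a, low + (high - low) * b
        out.append(hit)
    return out, False
```

A corrupted code value that falls into the reserved region is flagged immediately, which is the detection mechanism the schemes above exploit.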
With an eye to the present invention a Key-word Search for relevant Patents which involve inner and outer coding, trellis coding, data compression, error detection, error correction, variable length coding, arithmetic coding, and data transmission over noisy channels, has provided:
No known reference or combination of references, however, discloses use of a joint source-channel encoding, symbol decoding and error correction system comprising encoder means, modulation-transmission means, and combination sequential, and encoded symbol, decoding means; wherein errors detected by the encoded symbol decoding means are corrected by methodology involving the changing of bistable elements in said sequential decoder means, or selection of a series of sequential bits from a plurality of said series of sequential bits which result from changing bistable elements in said sequential decoder means, particularly where said encoder means is an arithmetic encoder and encoded symbol decoding means comprises an arithmetic decoder, and encoded symbols are of variable length.
The present invention can be characterized as a system and method involving a concatenated scheme in which the functional roles of both:
The present invention system can be described as a variable symbol length, joint source-channel encoding, symbol decoding and error correction system comprising:
Said encoder means can optionally further comprise means for generating, and in a sequence expected by said encoded symbol decoding means, outputting an encoded sequence of bits for at least one reserved symbol before and/or after an encoded
Continuing, where the encoder means and encoded symbol decoder means are arithmetic, the present invention joint source-channel encoding, decoding and error correction system can be described as comprising:
Where the encoder means and encoded symbol decoder means are arithmetic, the present invention joint source-channel encoding, decoding and error correction system can be more precisely described as comprising:
A method of practicing the present invention, assuming the presence of an arithmetic encoder and arithmetic encoded symbol decoding system, can be recited as:
It is noted that the arithmetic encoder means and decoding means, which comprises a sequential decoder means and an arithmetic decoder means, can be any electronic systems which perform the indicated function.
It is felt beneficial to provide insight into a specific error correction procedure which can be performed by the present invention. Again, a present invention joint source-channel encoding system can be considered to be sequentially comprised of:
A method of correcting errors in decoded symbols which are encoded by an arithmetic encoder in a joint source-channel coding system comprises the steps of:
Said method of error correction can involve step d. being practiced more than once, with said error correcting method further comprising the step of:
Further, said error correction method can further comprise the step of:
Alternatively, said error correcting method can involve practice of step d. more than once, with said error correcting method further comprising the step of:
Again, the error detection method, in step d., involves the determination of the presence or absence of non-alphabet, (i.e., reserved), symbols other than as expected, said non-alphabet symbols being not-allowed as arithmetic encoder input symbols.
The just described approach to correcting errors requires that “branch-points” in the sequential decoder means be determined based upon a “null-zone” criterion, and involves retracing the contents of the sequential decoder means, and selectively changing an identified “1”/(“0”) to “0”/(“1”), when an error is identified. It is possible, however, to identify bistable elements in said sequential decoder means and define them as fixed branch points, based upon the modulation technique utilized. For instance, if a trellis coded modulation scheme is utilized, a well known 8-PSK Constellation Codeword Assignment approach can be practiced. When such an approach to correcting errors in decoded symbols which are encoded by an arithmetic encoder in a joint source-channel coding system is utilized, the method thereof can be described as comprising the steps of:
Said method of correcting errors in decoded symbols can, in step e., involve determining which series of sequential bits in said produced plurality of series of sequential bits is most likely correct based on applying at least one selection from the group consisting of:
Finally, it is specifically noted that, while not limiting, it is believed that Patentability is definitely established where the present invention system is comprised of an arithmetic encoder means, in combination with a decoding means which is comprised of a functional combination of a sequential decoder means and an arithmetic decoder means, wherein in use, error correction methodology is initiated upon the detecting, by the arithmetic decoder means, of a non-expected encoded reserved symbol, or the absence of an expected encoded reserved symbol sequentially inserted with encoded allowed symbols by the arithmetic encoder means. It is also noted that no arithmetic encoder means is known which provides operational error detection space. Computer simulation thereof, and of sequential and arithmetic decoder means then serve as example systems.
The present invention will be better understood by reference to the Detailed Description Section in combination with the Drawings.
It is therefore a primary purpose and/or objective of the present invention to provide a system comprising an outer symbol encoder means which comprises operational error detection space, and a combination sequential, and encoded symbol, decoding means, wherein said outer encoder means is preferably an arithmetic encoder, and the encoded symbol, decoding means is preferably an arithmetic decoder.
It is another purpose of the present invention to disclose use of reserved symbols as means to enable encoded symbol, decoding means, (arithmetic decoder), to identify errors, said identified errors being corrected by the changing of at least one bit in an associated sequential decoder means.
It is another purpose yet of the present invention to teach that error detection by an arithmetic decoder means can be based on detecting the presence of an unexpected encoded symbol or on detecting the absence of an expected encoded symbol.
It is yet another purpose of the present invention to disclose methods of enhancing the operation of the sequential decoder means in correcting of errors involving distance calculations, (e.g., Hamming and Euclidean distances).
It is a further purpose of the present invention to identify use of “null-zones”, or use of modulation-technique-determined specific “branch points”, in a sequence of bistable elements in a sequential decoder means.
Other purposes and/or objectives will become obvious from a reading of the Specification and Claims.
Turning now to the Drawings, there is shown in
In use, then, the Arithmetic Encoder sends a binary stream of bits (x_{k}) into the Modulation-Transmission means (2), in some mapped form, (e.g., mapped to ±√(E_{s}) for BPSK signaling).
(Note: BPSK stands for Binary Phase Shift Keying).
In the traditional sequential decoding scenario the structure of the convolution code imposes a restriction on possible decoded sequences and, hence, on possible branch points along a path. By discarding branches in which an error has been detected, the decoding tree can be pruned such that what is left is the decoded sequence with the lowest Hamming Distance from the received sequence. The structure of the convolution code then defines the valid paths in the tree. The job of the Decoder is then primarily to find the valid path that results in a decoded sequence with the minimum distance from the received sequence.
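The decoding principle described above, selecting the valid sequence at minimum Hamming distance from the received one, can be sketched as follows. The codebook here is a hypothetical toy set of valid codewords, not one defined by any particular convolution code.

```python
def hamming(a, b):
    # Number of bit positions in which two equal-length bit strings differ.
    return sum(x != y for x, y in zip(a, b))

# Hypothetical set of valid codewords imposed by some code structure.
VALID = ['0000', '0111', '1010', '1101']

def decode_min_hamming(received):
    # The decoder's job: find the valid path (codeword) whose decoded
    # sequence is at minimum Hamming distance from the received sequence.
    return min(VALID, key=lambda c: hamming(c, received))
```

For example, a received sequence with a single bit error is mapped back to the nearest valid codeword, which is exactly the pruning behavior the tree search implements incrementally.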
Where Arithmetic Encoders are utilized, the situation is not as simple. To apply sequential decoding procedures to the case wherein an Arithmetic Encoder is utilized, two considerations become important:
The first requirement is easily satisfied if use is made of error detection space in the Arithmetic Encoder, and in fact, it is noted that it is satisfied in a stronger manner than where convolution encoding is utilized. That is, in the Arithmetic Encoding case in which use is made of error detection space, the appearance of a symbol corresponding to the error detection space is a definite indication of error.
The second requirement is not as easily satisfied. This is because, unlike in the convolution encoder case, the output of the arithmetic encoder is not restricted in terms of bit patterns which it can output; hence, an associated tree would have each bit as a branch point and the tree grows exponentially with the number of bits in a sequence. Thus it becomes necessary to identify specific branch points which are most likely to be the location of error, and to arrive at a more rational code tree. Present invention methodology makes use of information available at the output of the Modulation-Transmission means (2), (i.e., a Channel), to identify the most likely error branch points.
Assuming BPSK signalling and an additive white Gaussian noise channel, the signal space can be represented as shown in
The number of possible paths can be represented as a fully connected binary Trellis, such as shown in FIG. 3. The heavy lines in the
To aid with understanding, suppose that at point “X” in
As a specific example, consider that the output of an Arithmetic Encoder is transmitted using a binary signalling scheme with √(E_{s})=1. Further consider that said output is transmitted over a Modulation-Transmission means (2), (i.e., a Channel), which corrupts it with additive noise such that the output of a signal receiver would provide:
R_{k}={−1.06, −1.06, −0.14, 1.56, −1.11, −1.39, . . . , 0.09, 0.04, 0.67, −1.55, 1.03, 0.71}.
If Δ=0.1 is chosen for the null zone magnitude, and hard decoding is performed on the received values while marking bits corresponding to signals that fall in the null zone by a “*”, then the following results:
X̂_{Δ=0.1}={0,0,0,1,0,0,*1,*1,1,0,1,1};
and the Tree of
X̂_{Δ=0.2}={0,0,
where denotes the explored branch point and the * denotes unexplored branch points.
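The null-zone hard decoding illustrated in the example above can be sketched as follows. This is an illustrative sketch assuming the mapping bit 1 → +√(E_s), bit 0 → −√(E_s), and treating the twelve listed received values as the complete sequence (the ellipsis in R_k is assumed to elide nothing).

```python
def null_zone_decode(received, delta):
    # Hard-decide each sample (positive -> 1, negative -> 0), and mark any
    # sample whose magnitude falls inside the null zone with a "*" to flag
    # it as a potential branch point for the sequential decoder.
    out = []
    for r in received:
        bit = '1' if r > 0 else '0'
        out.append('*' + bit if abs(r) < delta else bit)
    return out

# Received values from the example above.
R = [-1.06, -1.06, -0.14, 1.56, -1.11, -1.39,
     0.09, 0.04, 0.67, -1.55, 1.03, 0.71]
```

With Δ=0.1 this reproduces the marked sequence X̂ given in the example: only the two samples of magnitude below 0.1 (0.09 and 0.04) are flagged as branch points.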
In order to capture an error it is sufficient that Δ be greater than the magnitude of the error. It would seem then that selecting a large value for Δ is desirable, however, as already mentioned, such an approach leads to proliferation of branches in a resulting Tree. Further, it is known that small magnitude errors are more likely than are large magnitude errors, and as a result large values of Δ typically do not provide significant benefit. Also, it is noted that the probability of an error being within the last “n” symbols is:
1−(1−ε)^{n }
and as a result it is possible to keep the default value of Δ small, and increase it for signals corresponding to the last “n” symbols when an error is detected. Another consideration is that the probability of an error being in a symbol close to the point at which an error is detected is higher than the probability of the error being in a symbol further away. With that in mind, it is again noted that the reason for increasing the value of Δ is to increase the number of branch points, and that if the number of branch points is increased too far, computational time can be wasted pursuing wrong paths. This leads to the insight that the null zone magnitude can beneficially be adjusted in a discriminating manner, and an algorithm enabling this is:
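The algorithm listing referenced above is not reproduced in this text. The following is only a plausible sketch consistent with the description: a small default null zone, widened for the bits nearest the point at which an error is detected, where an error is most likely to reside. The function name and parameters are hypothetical.

```python
def adaptive_null_zone(received, delta_default, delta_wide, n_last):
    # Apply a small default null zone everywhere, but a wider null zone
    # (delta_wide) to the last n_last samples, i.e. those closest to the
    # point at which the error was detected.
    marks = []
    for i, r in enumerate(received):
        delta = delta_wide if i >= len(received) - n_last else delta_default
        bit = '1' if r > 0 else '0'
        marks.append('*' + bit if abs(r) < delta else bit)
    return marks
```

A sample of magnitude 0.15 is a branch point only where the wide zone applies, so branch points proliferate near the detected error rather than along the whole sequence.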
It is noted that in an arithmetic decoder an error will almost always propagate. However, the use of detection space essentially guarantees that any error will eventually be detected. The “Depth First” algorithm allows correction of the errors by exploring branches of a code tree, but said approach can become computationally expensive. It is, however, possible to prune a code tree in order to reduce the number of computations. Several constraints can be used to accomplish said pruning of the code tree, and the inventors herein have made use of the fact that making incorrect “corrections” causes increased deviation from a correct path. Detection of proceeding along an incorrect path can be accomplished by, for instance, keeping track of Hamming distance, and/or keeping track of a Squared distance in the Euclidean sense.
Regarding the Hamming distance approach, keeping continuous track of the number of corrections still extant is key, with said count being compared against a threshold (T_{h}). The value of (T_{h}) is the maximum Hamming Distance between a received and decoded sequence which it is decided can be tolerated. The reasoning is that the probability of more errors is less than the probability of fewer errors, and that if an additional correction makes the number of corrections extant greater than (T_{h}), then the null zone should be expanded by increasing the value of Δ. Expanding the null zone increases the number of possible branch points and this increases the possibilities for decoding sequences at a distance (T_{h}) or less from the received sequence.
Regarding the approach based on Euclidean distance, a squared distance between received and decoded symbols is monitored. A running sum of the distance between the sequential decoder output (x_{k}) and the received sequence is computed and compared to the distance between the output of the hard decision decoder (x_{k}) and the received sequence. At a time “n” this is accomplished by comparing the
Euclidean distance for the sequential decoder means:
where m is the encoded bit sequence length; with a threshold:
where α is an experimentally determined offset. The idea is that as hard decisions are changed the Euclidean distance between decoded and received sequences increases. If the distance increases at a high rate it can be detected and is indicative of proceeding down a wrong path. If a high rate of increase is detected the decoder takes the same action as it did for the case where (T_{h}) is exceeded under the Hamming approach. If this approach becomes too restrictive, the offset which is considered acceptable can be incremented.
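The two discard criteria described above can be sketched together as a single test applied to a candidate decoding. This is an illustrative sketch: the function name is hypothetical, BPSK with √(E_s)=1 is assumed, and the threshold values passed in are illustrative rather than the experimentally determined ones.

```python
def should_backtrack(decoded_bits, hard_bits, received, t_h, alpha):
    # 1. Hamming criterion: count how many hard decisions the candidate
    #    decoding has flipped; more than t_h flips triggers a backtrack.
    corrections = sum(d != h for d, h in zip(decoded_bits, hard_bits))
    if corrections > t_h:
        return True

    # 2. Euclidean criterion: the squared distance of the candidate decoding
    #    from the received values must not exceed the hard-decision distance
    #    by more than the offset alpha.
    def sq_dist(bits):
        # bit 1 -> +1.0, bit 0 -> -1.0 (BPSK with sqrt(Es) = 1)
        return sum((r - (1.0 if b == '1' else -1.0)) ** 2
                   for r, b in zip(received, bits))

    return sq_dist(decoded_bits) > sq_dist(hard_bits) + alpha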
The value of (T_{h}) can be initialized to 1.0 if it is desired to explore all single error events, with increases in (T_{h}) being implemented only when a maximum value of Δ is applied. It is noted, however, that single errors with large Δ may actually be less a problem than double errors with a small Δ. Thus, it can be advantageous to increase the value of (T_{h}) before increasing Δ.
In view of the foregoing, it should be appreciated that there are three parameters which can be varied in controlling the discard criteria, namely:
Δ; (T_{h}); and α.
In the following two present invention application scenarios are discussed, namely Breadth First and Depth First. In the Depth First approach the complexity depends almost completely on the number of symbol decodings that take place during a packet decoding. For a Breadth First approach, two major factors affect the complexity. The first is the average number of decodings that take place during the decoding of a packet, which remains less than M times the number of symbols in a given packet. The second factor is the sorting that takes place before an expansion at a branch point.
With the foregoing in mind, additional comments are appropriate regarding two distinguished approaches to Decoding, (ie. Breadth First and Depth First).
Applying the Breadth First approach involves fixing the size of the null zone prior to decoding. It is desirable to keep the null zone small to reduce the number of branch points and hence the amount of computation. At the same time it is necessary to utilize a null zone sufficiently large that the probability of missing an error is below what it is determined can be tolerated. Assuming an AWGN channel with a known SNR, Δ can be selected as:
where m is the number of bits per packet, p is the channel error probability, and q is the desired lower bound on packet decoding rate. The function Q is given by:
For this value of Δ the average number of branch points can be calculated as:
which simplifies to:
B(p, q)=m[Q((1−η)Q^{−1}(p))−Q((1+η)Q^{−1}(p))] (3)
where η=Δ(p,q) and m is the average number of bits per packet. In this implementation, detection of the error detection symbol by the decoder is used to prune the code tree, and the Euclidean distance between the decoded and received sequence is used for selecting the best M paths. However, picking the value of M involves tradeoffs, with smaller values of M increasing the probability that a correct path will be discarded. The solution adopted was to first perform decoding using a small value of M. If this does not result in successful decoding then M is increased by a value M_{inc} and the procedure is repeated. Said procedure is repeated until the packet is decoded or a predetermined threshold M_{max} is reached, at which point a decoding error is declared.
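The M-widening retry schedule just described can be sketched as follows. The decoder itself is abstracted away: decode_with_beam is a hypothetical stand-in for the M-best sequential decoder, returning the decoded symbols or None on failure; the default parameter values are those used later in the specification.

```python
def breadth_first_decode(packet, decode_with_beam,
                         m=200, m_inc=1800, m_max=2000):
    # Try decoding with a small beam width M; on failure, widen the beam by
    # M_inc and retry, until the packet decodes or M_max is exceeded.
    while m <= m_max:
        result = decode_with_beam(packet, m)
        if result is not None:
            return result
        m += m_inc
    return None          # decoding error declared
```

With the defaults above this makes at most two attempts (M=200, then M=2000), matching the parameter choices reported for the Breadth First experiments.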
It is further noted, in the context of a Breadth First approach, that knowing the Modulation Technique applied can allow determination of Specific Bistable Elements in a Sequential Decoder means which serve as fixed “Branch Points”.
Of course the selected series of sequential bits will be determined by at least one criterion being met, said criterion being for instance:
To implement the Depth First approach the parameters required are:
It has been found useful to define two thresholds T_{h,t} and T_{h,w} for the Hamming distance and two thresholds α_{t} and α_{w} for the Euclidean distance. The total Hamming distance between the decoded sequence and the sequence obtained by hard decision decoding is compared to the threshold T_{h,t} as previously described. The Hamming distance between the decoded sequence on the code Tree and the sequence obtained by hard decision decoding in a sliding window of size L_{w} is compared to the threshold T_{h,w}. The end point of the sliding window is the current bit. A similar procedure is used for the Euclidean distance. It is noted that the values of T_{h,t} and T_{h,w} are obtained using two estimates of channel noise variance, one for the entire received sequence σ_{t}^{2}, and one for the sliding window of size L_{w}. The variance σ_{t}^{2} is translated into a channel probability error “p”, and the two thresholds are obtained as:
T_{h,t}=np(1+4σ_{t}), T_{h,w}=L_{w}p(1+8σ_{w})
In a specific case, the length of the sliding window L_{w} was set to 50, and both the T_{h} parameters were set to a minimum default value of 2. The T_{e} parameters were found by hard decision decoding to produce X and then setting α_{t} and α_{w} to 0.2 and 2.0 respectively. The value of Δ was initially set to 0.10√(E_{s}). When the decoder backtracked a symbol distance of L_{nz}=5, the value of Δ was increased by Δ_{inc}=0.10√(E_{s}) to a maximum of 0.70√(E_{s}). If the decoder backtracked to the root of the code Tree, the values of T_{h,t} and T_{e,t} were increased by 10% and the values of T_{h,w} and T_{e,w} were increased by 20%.
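The Hamming threshold formulas above can be sketched directly. The floor at the minimum default value of 2 is this sketch's reading of the specific case described; the function name is hypothetical.

```python
def hamming_thresholds(n, l_w, p, sigma_t, sigma_w, minimum=2):
    # T_h,t = n * p * (1 + 4*sigma_t)  over the whole received sequence,
    # T_h,w = L_w * p * (1 + 8*sigma_w) over the sliding window,
    # each floored at the minimum default value.
    t_ht = max(minimum, n * p * (1.0 + 4.0 * sigma_t))
    t_hw = max(minimum, l_w * p * (1.0 + 8.0 * sigma_w))
    return t_ht, t_hw
```

Noisier channel estimates (larger p or σ) thus loosen both thresholds, allowing the decoder to tolerate more corrections before widening the null zone.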
Computational effort was determined by computing the ratio of the total number of decode operations performed by the decoder to the number of symbols transmitted. In the case where no errors occurred this ratio is one. When an error is detected, because of backtracks, the decoding scheme requires more decode operations than the number of symbols transmitted resulting in a value greater than one. When said ratio exceeded 10^{3 }a decoding failure was declared.
Table 1 presents the results of using the depth first decoding approach in terms of packet recovery rates for the four different values of the error detection space:
TABLE 1 | ||||||
Packet Recovery Rates for Depth First Decoding | ||||||
p_{c }= 10^{−1.5} | p_{c }= 10^{−2.0} | p_{c }= 10^{−2.5} | p_{c }= 10^{−3.0} | |||
NONE | 0.00 | 0.00 | 0.01 | 24.64 | ||
ε = 0.08 | 0.00 | 0.39 | 46.63 | 96.72 | ||
ε = 0.16 | 0.00 | 17.04 | 95.94 | 99.17 | ||
ε = 0.29 | 0.00 | 71.09 | 99.21 | 99.56 | ||
ε = 0.50 | 0.19 | 88.23 | 99.51 | 99.66 | ||
However, for higher error rates the recovery rates drop significantly. Note that for a given channel error probability the amount of error space that is used is inversely proportional to the probability of packet loss.
To implement the Breadth First approach various parameter values were selected as follows: M=200, M_{inc}=1800, and M_{max}=2000. Δ was chosen to be {1.20, 1.00, 0.91, 0.82} for channel error probabilities of {10^{−1.5}, 10^{−2}, 10^{−2.5}, 10^{−3}}, respectively. The parameters used give lower bounds on packet loss rates of {10^{−1.5}, 10^{−3}, 10^{−4}, 10^{−5}}, respectively. It should be recalled that the algorithm functions by first listing all possible paths at a branch point, then pruning all but the M which are closest, in Euclidean distance, to the received sequence. Between branch points, paths get pruned because progressing along them results in the decoding of the error detection space symbol.
Table 2 presents recovery rates for the case where Breadth First decoding was applied.
TABLE 2 | ||||||
Packet Recovery Rates for Breadth First Decoding | ||||||
p_{c }= 10^{−1.5} | p_{c }= 10^{−2.0} | p_{c }= 10^{−2.5} | p_{c }= 10^{−3.0} | |||
NONE | 0.00 | 0.00 | 0.01 | 24.64 | ||
ε = 0.08 | 0.00 | 38.53 | 99.89 | 99.99 | ||
ε = 0.16 | 0.00 | 92.03 | 99.94 | 99.99 | ||
ε = 0.29 | 16.63 | 99.30 | 99.95 | 99.99 | ||
ε = 0.50 | 73.24 | 99.33 | 99.95 | 99.99 | ||
Finally, performance of the present invention Joint Source Channel Coding Strategy is compared to that of three conventional schemes:
TABLE 3 | |||
Coding Rates | |||
| |||
NONE | 1.000 | ||
ε = 0.29 | 0.901 | ||
ε = 0.50 | 0.819 | ||
(223, 255) RS | 0.875 | ||
⅘ Conv, s = 8 | 0.800 | ||
⅘ Conv, s = 16 | 0.800 | ||
{2.368, 4.323, 5.714, 6.790} decibels.
Continuing, the amount of redundancy indicated in Table 3 shows that the convolutional codes have the highest amount thereof, followed by the present invention scheme with ε=0.50. The present invention scheme with ε=0.29 has the lowest amount of added redundancy of the schemes compared. It should be specifically appreciated that the present invention algorithm is only slightly more complex than a standard Arithmetic encoding scheme, with the added complexity being present primarily at the decoder.
In the Depth First approach the complexity depends almost completely on the number of symbol decodings that take place during a packet decoding, hence the complexity is slightly more than the average number of symbol decodings for a given SNR.
As alluded to earlier, for a Breadth First approach, two major factors affect the complexity. The first is the average number of decodings that take place during the decoding of a packet, which remains less than M times the number of symbols in a given packet. The averages can be seen in
Present invention schemes provide substantial packet recovery rates at channel rates as low as 10^{−1.5 }with low coding overhead. Such schemes are useful in hostile communication environments where minimal coding overhead is advantageous. The approach may be especially useful for mobile and wireless applications.
The present invention can be applied in communication systems which operate based on Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK) and Trellis Coded Modulation (TCM) etc.
It is noted that the terminology “variable length” refers to the length of code words assigned to input symbols, and the terminology “joint source-channel symbol encoding” refers to the use of the same encoding means to encode “allowed alphabet symbols” and “non-alphabet symbols” for use in error correction.
Finally, it is noted that the present invention is primarily useful when applied with variable length symbol coding methods. For example, Huffman coding codes more probable symbols with shorter bit sequences. Arithmetic encoders code strings of symbols into a sequence of bits, and Claim language structure is focused to apply thereto.
Having hereby disclosed the subject matter of the present invention, it should be obvious that many modifications, substitutions, and variations of the present invention are possible in view of the teachings. It is therefore to be understood that the invention may be practiced other than as specifically described, and should be limited in its breadth and scope only by the Claims.
Cited Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|
US3602886 * | Jul 25, 1968 | Aug 31, 1971 | Ibm | Self-checking error checker for parity coded data |
US4286256 | Nov 28, 1979 | Aug 25, 1981 | International Business Machines Corporation | Method and means for arithmetic coding utilizing a reduced number of operations |
US4295125 | Apr 28, 1980 | Oct 13, 1981 | International Business Machines Corporation | Method and means for pipeline decoding of the high to low order pairwise combined digits of a decodable set of relatively shifted finite number of strings |
US4862464 | Dec 30, 1987 | Aug 29, 1989 | Paradyne Corporation | Data error detector for digital modems using trellis coding |
US5097151 | Feb 13, 1991 | Mar 17, 1992 | U.S. Philips Corporation | Sequential finite-state machine circuit and integrated circuit |
US5200962 | Nov 3, 1988 | Apr 6, 1993 | Racal-Datacom, Inc. | Data compression with error correction |
US5206864 | Dec 4, 1990 | Apr 27, 1993 | Motorola Inc. | Concatenated coding method and apparatus with errors and erasures decoding |
US5233629 | Jul 26, 1991 | Aug 3, 1993 | General Instrument Corporation | Method and apparatus for communicating digital data using trellis coded qam |
US5311177 | Jun 19, 1992 | May 10, 1994 | Mitsubishi Denki Kabushiki Kaisha | Code transmitting apparatus with limited carry propagation |
US5317428 | Nov 25, 1992 | May 31, 1994 | Canon Kabushiki Kaisha | Image encoding method and apparatus providing variable length bit stream signals |
US5418863 | Jul 28, 1993 | May 23, 1995 | Canon Kabushiki Kaisha | Imaging coding device |
US5517511 | Nov 30, 1992 | May 14, 1996 | Digital Voice Systems, Inc. | Digital transmission of acoustic signals over a noisy communication channel |
US5587710 | Mar 24, 1995 | Dec 24, 1996 | National Semiconductor Corporation | Syntax based arithmetic coder and decoder |
US5708510 | May 16, 1995 | Jan 13, 1998 | Mitsubishi Denki Kabushiki Kaisha | Code conversion system |
US5710826 | Feb 2, 1994 | Jan 20, 1998 | Canon Kabushiki Kaisha | Image encoding method |
US5715332 | Mar 7, 1995 | Feb 3, 1998 | Canon Kabushiki Kaisha | Image transmission method, and apparatus therefor |
US5745504 | Jun 25, 1996 | Apr 28, 1998 | Telefonaktiebolaget Lm Ericsson | Bit error resilient variable length code |
US5774081 | Dec 11, 1995 | Jun 30, 1998 | International Business Machines Corporation | Approximated multi-symbol arithmetic coding method and apparatus |
US5841794 | May 27, 1994 | Nov 24, 1998 | Sony Corporation | Error correction processing method and apparatus for digital data |
US5870405 | Mar 4, 1996 | Feb 9, 1999 | Digital Voice Systems, Inc. | Digital transmission of acoustic signals over a noisy communication channel |
US5894486 * | Apr 2, 1996 | Apr 13, 1999 | Nec Corporation | Coding/decoding apparatus |
US5910967 | Oct 20, 1997 | Jun 8, 1999 | Sicom, Inc. | Pragmatic encoder and method therefor |
US5983382 | Dec 31, 1996 | Nov 9, 1999 | Lucent Technologies, Inc. | Automatic retransmission query (ARQ) with inner code for generating multiple provisional decodings of a data packet |
US6009203 | Aug 14, 1997 | Dec 28, 1999 | Advanced Micro Devices, Inc. | Method and apparatus for hybrid VLC bitstream decoding |
US6236645 | Mar 9, 1998 | May 22, 2001 | Broadcom Corporation | Apparatus for, and method of, reducing noise in a communications system |
US6332043 * | Nov 27, 1998 | Dec 18, 2001 | Sony Corporation | Data encoding method and apparatus, data decoding method and apparatus and recording medium |
US6349138 | Apr 23, 1997 | Feb 19, 2002 | Lucent Technologies Inc. | Method and apparatus for digital transmission incorporating scrambling and forward error correction while preventing bit error spreading associated with descrambling |
JPS63215226A | Title not available |
Reference | ||
---|---|---|
1 | "Implementation Issues in MAP Joint Source/Channel Coding", Sayood, Liu and Gibson, Proc. 22nd Annual Asilomar Conf. on Circuits, Systems, and Computers, p. 102-106, IEEE, (Nov. 1988). |
2 | "Joint Source Channel Coding for Variable Length Codes", Sayood, Otu and Demir, IEEE Transactions on Communications, 48 p. 787-794, (May 2000). | |
3 | Boyd et al., "Integrating Error Detection into Arithmetic Coding", IEEE Transactions on Communications, 45(1), p. 1-3, (Jan. 1997). | |
4 | Elmasry, "Arithmetic Coding Algorithm with Embedded Channel Coding", Electronics Lett., 33 p. 1687-1688, (Sep. 1997). | |
5 | Gitlin et al., "Null Zone Decision Feedback Equalizer Incorporating Maximum Likelihood Bit Detection", IEEE Trans. on Communications, 23 p. 1243-50 (Nov. 1975). | |
6 | Kozintsev et al., "Image Transmission Using Arithmetic Coding Based on Continuous Error Detection", Proc. Of Data Compression Conf. p. 339-348, IEEE Computer Society Press, (1998). | |
7 | Maxted et al., "Error Recovery for Variable Length Codes", IEEE Trans. on Information Theory, IT-31, p. 794-801, (Nov. 1985) examines the effect of errors on variable length codes. | |
8 | Monaco, "Error Recovery for Variable Length Codes", IEEE Trans. on Information Theory, IT-33, p. 454-456, (May 1987) provides corrections and additions to the Maxted article. | |
9 | Murad and Fuja, "Robust Transmissions of Variable-Length Encoded Sources", in Proc. IEEE Wireless and Networking Conf. 1999, (Sep. 1999). |
10 | Park et al., "Decoding Entropy-Coded Symbols Over Noisy Channels by MAP Sequence Estimation for Asynchronous HMMs", Proc. Conference on Information Sciences and Systems, IEEE, (Mar. 1999). |
11 | Rahman et al., "Effects of a Binary Symmetric Channel on the Synchronization Recovery of Variable Length Codes", Computer Journal 32, p. 246-251, (Jan. 1989). | |
12 | Rahman et al., "Effects of a Binary Symmetric Channel on the Synchronization Recovery of Variable Length Codes", Computer Journal 32, p. 783-792, (Feb. 1994). | |
13 | Regunathan et al., "Robust Image Compression for Time Varying Channels", Proc. Thirty-First Asilomar Conf. on Signals, Systems and Computers, p. 968-972, (Nov. 1997). |
14 | Sayood et al., "Implementation Issues in MAP Joint Source/Channel Coding", Proc. 22nd Annual Asilomar Conf. On Circuits, Systems, and Computers, p. 102-106, IEEE, (Nov. 1988). |
15 | Sayood et al., "Joint Source Channel Coding for Variable Length Codes", IEEE Transactions on Communications, 48 p. 787-794, (May 2000). | |
16 | Sayood et al., Joint Source/Channel Coding for Variable-Length Encoded Sources. In Proc. IEEE Wireless and Networking Conference 1999. IEEE Sep. 1999. | |
17 | Sherwood et al., "Progressive Image Coding for Noisy Channels", IEEE Signal Processing Lett., 4 p. 189-191, (Jul. 1997). | |
18 | Soualhi et al., "Simplified Expression for the Expected Error Span Recovery for Variable Length Codes", Intl. J. of Electronics, vol. 75, p. 811-816, (Nov. 1993); Issue 5. | |
19 | Swaszek et al., "More on the Error Recovery for Variable Length Codes", IEEE Trans. On Information Theory, IT-41, p. 2064-2071, (Nov. 1995). | |
20 | Takishima et al., "Error States and Synchronization Recovery for Variable Length Codes", IEEE Trans. On Communications, 42, p. 783-792, Feb.-Apr. 1994. | |
21 | Viterbi et al., "Principles of Digital Communications and Coding", McGraw Hill, 1979. | |
22 | Wozencraft et al., "Principles of Communication Engineering", John Wiley & Sons, New York, 1965. | |
23 | Yang et al., "Robust Image Compression Based on Self-Synchronizing Huffman Code and Inter-Subband Dependency", Proc. Thirty-Second Asilomar Conference on Signals, Systems and Computers, p. 1631-1635, IEEE, Nov. 1998. | |
24 | Yang, "Robust Image Compression Based on Self-Synchronizing Huffman Code and Inter-Subband Dependency", Proc. Thirty-Second Asilomar Conference on Signals, Systems and Computers, p. 986-972, Nov. 1997. |
U.S. Classification | 714/779, 370/277, 714/752, 380/200, 714/786 |
International Classification | H03M13/01, H03M13/25, H03M13/47, H03M13/07, H04L1/00, H03M7/40 |
Cooperative Classification | H04L1/0054, H03M13/47, H03M7/4006, H03M13/6312, H04L1/0056, H03M13/07 |
European Classification | H04L1/00B7, H03M13/07, H03M7/40A, H04L1/00B5L, H03M13/47, H03M13/63C |
Date | Code | Event | Description |
---|---|---|---|
Sep 25, 2012 | CC | Certificate of correction | |
Oct 29, 2012 | FPAY | Fee payment | Year of fee payment: 8 |