US 20020014981 A1
Abstract
A binary arithmetic coder and decoder provide improved coding accuracy due to improved probability estimation and adaptation. They also provide improved decoding speed through a “fast path” design wherein decoding of a most probable symbol requires few computational steps. Coded data represents data that is populated by more probable symbols (“MPS”) and less probable symbols (“LPS”). In an embodiment, a decoder receives a segment of the coded data as a binary fraction C. It defines a coding interval of possible values of C, the interval extending from a variable lower bound A to a constant upper bound 1. For each position in the decoded symbol string, the decoder computes a test value Z that subdivides the coding interval into sub-intervals according to the relative probabilities that an MPS or an LPS occurs in the position. A first sub-interval extends from the lower bound A to the test value Z; a second sub-interval extends from the test value Z to 1. If C is greater than Z, the decoder emits an MPS for the current position in the decoded symbol string and sets the lower bound A to the test value Z for use during decoding of the next position in the decoded symbol string. If C is less than Z, the decoder emits an LPS and computes a new lower bound A and a new binary fraction C for use during decoding of the next position in the decoded symbol string. The encoder operates according to analogous techniques to compose coded data from original data.
Claims (41)
1. A method for decoding coded data into a decoded symbol string populated by more probable symbols (“MPS”) and less probable symbols (“LPS”), the method comprising the steps of:
receiving a segment of coded data interpreted as a binary fraction C;
defining a coding interval of possible values of C, the interval extending from a variable lower bound A to a constant upper bound 1;
for each position in the decoded symbol string:
computing a test value Z subdividing the coding interval into sub-intervals in accordance with the relative probabilities that an MPS and an LPS occur in the position, a first sub-interval extending from the lower bound A to the test value Z, the second sub-interval extending from the test value Z to 1;
if C is greater than Z:
placing an MPS at the current position in the decoded symbol string, and
setting the lower bound A to the test value Z for use in decoding of a next position in the decoded symbol string; and
if C is less than Z:
placing an LPS at the current position in the decoded symbol string, and
computing a new lower bound A and a new binary fraction C for use in decoding of the next position in the decoded symbol string.
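By way of illustration, one pass of the decoding loop recited in claim 1 may be sketched in C as follows. This is a minimal sketch, not the fixed-point implementation described later: A, C and Z are held as floating-point fractions, the tie C=Z (which the claim leaves unspecified) is resolved toward the MPS, and the LPS updates A+1−Z and C+1−Z are taken from the decoder embodiments described in the specification below; register renormalization is omitted.

    #include <stdbool.h>

    /* Minimal sketch of one pass of the claim-1 decoding loop.
     * A and C are fractions in [0,1); Z is the test value splitting [A,1).
     * Real implementations use 16-bit fixed-point registers (see below);
     * the tie C == Z is resolved here toward the MPS. */
    static bool decode_step(double *A, double *C, double Z, bool mps)
    {
        if (*C >= Z) {          /* C lies in [Z,1): emit an MPS */
            *A = Z;             /* new lower bound for the next position */
            return mps;
        } else {                /* C lies in [A,Z): emit an LPS */
            *A += 1.0 - Z;      /* new lower bound A */
            *C += 1.0 - Z;      /* new binary fraction C */
            return !mps;
        }
    }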
2. The method of computing a first test value Z1 derived from the lower bound A and from a current estimate P of the probability of the LPS symbol, computing a second test value Z2 derived from the lower bound A and from the current estimate P of the probability of the LPS symbol, and setting the test value Z to the lesser of Z1 and Z2.
3. The method of Z1=A+P/R1, and Z2=½+A/2−(½−P)/R2, wherein R1 and R2 are powers of two.
4. The method of
5. The method of
6. The method of defining a fence variable F; if the first test value Z1 is smaller than the fence variable F:
placing an MPS at the current position in the decoded symbol string,
setting the lower bound A to the test variable Z1 for use in decoding of a next position in the decoded symbol string, and
immediately proceeding to decoding of the next position in the decoded symbol string; and
whenever the binary fraction C is modified by the decoding process, redefining the fence variable F to be less than the binary fraction C.
7. The method of
8. The method of
9. The method of if the decoded symbol is an LPS, setting the parameter P to a value representing an increased estimated probability for the LPS symbol; and if the decoded symbol is an MPS: comparing the lower bound A with a threshold value M depending on P, and if the lower bound A is greater than the threshold value M, setting the parameter P to a value representing a decreased estimated probability for the LPS symbol.
10. The method of the actual value of the parameter P, the value of the threshold M, the new value of the index used for representing an increased estimated probability for the LPS symbol and the new value of the index used for representing a decreased estimated probability for the LPS symbol.
11. The method of if the decoded symbol is an LPS, setting the parameter P to a value representing an increased estimated probability for the LPS symbol; if the decoded symbol is an MPS,
comparing the test value Z with a threshold value M depending on P, and
if the test value Z is greater than the threshold value M, setting the parameter P to a value representing a decreased estimated probability for the LPS symbol.
12. The method of the actual value of the parameter P, the value of the threshold M, the new value of the index used for representing an increased estimated probability for the LPS symbol and the new value of the index used for representing a decreased estimated probability for the LPS symbol.
13. A method for encoding data, the data represented by a symbol string populated by more probable symbols (“MPS”) and less probable symbols (“LPS”), the method comprising the steps of:
initializing a code accumulator S;
defining a coding interval extending from a variable lower bound A to a constant upper bound 1;
for each position in the symbol string:
computing a test value Z subdividing the coding interval into sub-intervals in accordance with the relative probabilities that an MPS and an LPS occur in the position, a first sub-interval extending from the lower bound A to the test value Z, the second sub-interval extending from the test value Z to 1;
if the symbol located at the current position in the symbol string is an MPS, setting the lower bound A to the test value Z for use in encoding of a next position in the symbol string;
if the symbol located at the current position in the symbol string is an LPS,
adding the length of the second sub-interval to the accumulator S, and
computing a new lower bound A for use in encoding of a next position in the symbol string; and
when a predefined criterion is met, outputting a segment of coded data and computing new values for both the accumulator S and the lower bound A.
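The encoding loop of claim 13 admits a similarly compact sketch. Again this is illustrative only: fractions are held as doubles, the accumulator update follows the claim (the second sub-interval [Z,1) has length 1−Z), the new lower bound A+1−Z is taken from the encoder description in the specification below, and register renormalization and code emission are omitted.

    /* Minimal sketch of one pass of the claim-13 encoding loop.
     * S accumulates the code string; renormalization/emission omitted. */
    static void encode_step(double *A, double *S, double Z, bool is_mps)
    {
        if (is_mps) {
            *A = Z;             /* MPS: set the lower bound to Z, keeping [Z,1) */
        } else {
            *S += 1.0 - Z;      /* add the length of the second sub-interval to S */
            *A += 1.0 - Z;      /* new lower bound A */
        }
    }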
14. The method of computing a first test value Z1 derived from the lower bound A and from a current estimate P of the probability of the LPS symbol, computing a second test value Z2 derived from the lower bound A and from the current estimate P of the probability of the LPS symbol, and setting the test value Z to the lesser of Z1 and Z2.
15. The method of wherein Z1 and Z2 are computed according to: Z1=A+P/R1, and Z2=½+A/2−(½−P)/R2, wherein R1 and R2 are powers of two.
16. The method of
17. The method of
18. The method of if the decoded symbol is an LPS, setting the parameter P to a value representing an increased estimated probability for the LPS symbol; and if the decoded symbol is an MPS:
comparing the lower bound A with a threshold value M depending on P, and
if the lower bound A is greater than the threshold value M, setting the parameter P to a value representing a decreased estimated probability for the LPS symbol.
19. The method of the actual value of the parameter P, the value of the threshold M, the new value of the index used for representing an increased estimated probability for the LPS symbol and the new value of the index used for representing a decreased estimated probability for the LPS symbol.
20. The method of if the decoded symbol is an LPS, setting the parameter P to a value representing an increased estimated probability for the LPS symbol; if the decoded symbol is an MPS,
comparing the test value Z with a threshold value M depending on P, and
if the test value Z is greater than the threshold value M, setting the parameter P to a value representing a decreased estimated probability for the LPS symbol.
21. The method of the actual value of the parameter P, the value of the threshold M, the new value of the index used for representing an increased estimated probability for the LPS symbol and the new value of the index used for representing a decreased estimated probability for the LPS symbol.
22. A method of decoding coded data into decoded data, the decoded data represented by a symbol string of more probable symbols (“MPS”) and less probable symbols (“LPS”), the method comprising:
receiving a segment of coded data as a binary fraction,
for a position in the decoded symbol string,
defining an interval of possible values of the coded data, the interval bounded by 1 and a lower bound,
computing a test variable that divides the interval into two sub-intervals according to relative probabilities that the position should be occupied by the MPS or the LPS, a first sub-interval extending from 1 to the test variable and associated with the MPS, a second sub-interval extending from the test variable to the lower bound and associated with the LPS,
when the coded data segment occupies the first sub-interval, placing an MPS in the position, and
when the coded data segment occupies the second sub-interval, placing an LPS in the position.
23. The method of
24. The method of 1 and the test variable for use in decoding of a next position in the decoded symbol string.
25. The method of
26. The method of
27. The method of
28. The method of
29. A method of decoding coded data into decoded data, the decoded data represented by a symbol string of more probable symbols (“MPS”) and less probable symbols (“LPS”), the method comprising:
receiving a segment of coded data as a binary fraction,
for a position in the decoded symbol string,
defining an interval of possible values of the coded data, the interval bounded by 1 and a lower bound,
computing a test variable that divides the interval into two sub-intervals according to relative probabilities that the position should be occupied by the MPS or the LPS, a first sub-interval extending from 1 to the test variable and associated with the MPS, a second sub-interval extending from the test variable to the lower bound and associated with the LPS,
computing a fence variable to be the lesser of the coded data segment and ½, and
when the test variable is less than the fence variable, placing an MPS in the position.
30. The method of
31. The method of
32. The method of
33. The method of
34. A method of decoding coded data into decoded data, comprising the steps of:
receiving a segment of coded data as a binary fraction,
for a position in the decoded symbol string,
defining an interval of possible values of the coded data, the interval bounded by 1 and a lower bound,
computing a test variable that divides the interval into two sub-intervals according to relative probabilities that the position should be occupied by the MPS or the LPS, a first sub-interval extending from 1 to the test variable and associated with the MPS, a second sub-interval extending from the test variable to the lower bound and associated with the LPS,
computing a fence variable to be the lesser of the coded data segment and ½,
decoding an MPS for the position when either of the following conditions occurs: the test variable is less than the fence variable, or the test variable is less than the segment of coded data, and
decoding an LPS for the position and performing LPS adaptation when neither of the conditions occurs.
35. The method of
36. A method of decoding coded data, the coded data representing a sequence of symbols including a most probable symbol (“MPS”) and a least probable symbol (“LPS”), the method comprising:
receiving a segment of the coded data as a fractional value;
initializing a variable representing a lower limit on possible values of the segment of coded data to equal zero; and
iteratively,
calculating a test variable that divides an interval from the lower limit to one according to relative probabilities that a next symbol to be decoded is an LPS or an MPS,
calculating a fence variable representing the lesser of one half and the value of the segment of coded data,
decoding the next symbol to be an MPS when the test variable is less than the fence variable,
otherwise, decoding the next symbol to be an MPS when the test variable is less than the value of the segment of coded data,
otherwise, decoding the next symbol to be an LPS,
when an MPS is decoded, setting the lower limit for a next iteration equal to the test variable, and
when an LPS is decoded, setting the lower limit and the segment of coded data equal to their respective values for the instant iteration, each increased by an amount equal to the difference between 1 and the test variable.
37. The method of
38. The method of
39. A decoder adapted to perform the following functions:
receive a segment of the coded data as a fractional value;
initialize a variable representing a lower limit on possible values of the segment of coded data to equal zero; and
iteratively,
calculate a test variable that divides an interval from the lower limit to one according to relative probabilities that a next symbol to be decoded is an LPS or an MPS,
calculate a fence variable representing the lesser of one half and the value of the segment of coded data,
decode the next symbol to be an MPS when the test variable is less than the fence variable,
otherwise, decode the next symbol to be an MPS when the test variable is less than the value of the segment of coded data,
otherwise, decode the next symbol to be an LPS,
when an MPS is decoded, set the lower limit for a next iteration equal to the test variable, and
when an LPS is decoded, set the lower limit and the segment of coded data equal to their respective values for the instant iteration, each increased by an amount equal to the difference between 1 and the test variable.
40. The decoder of
41. The decoder of
Description
[0001] 1. Field of the Invention
[0002] The present invention relates to an improved adaptive binary arithmetic coder that provides improved processing speed and accuracy over conventional arithmetic coders.
[0003] 2. Related Art
[0004] Arithmetic coders provide well-known algorithms for encoding data. Compression ratios of arithmetic coders can reach the information theory limit. The arithmetic coder and decoder must possess good estimates of the probability distribution of each symbol to be coded. For each symbol to be coded in a symbol string, the encoder and decoder must possess a table containing estimated probabilities for the occurrence of each possible symbol at each point in the symbol string. The coders themselves must perform a table search and at least one multiplication. For this reason, arithmetic coders incur high computational expense. Binary adaptive arithmetic coders, such as the “Q-Coder” by Pennebaker et al. (1988) and the “QM-Coder” by Ono (1993), have been developed to overcome this drawback.
[0005] A high-level system diagram of a prior art binary arithmetic coder is shown in FIG. 1. Data to be coded is input to an encoder
[0006] The coding process often is described by the operation of the decoder
[0007] Each of the sub-intervals can be divided into two smaller sub-intervals having lengths that are proportional to the estimated conditional probabilities of the second symbol bit given the previously encoded symbol bit. Any code string located in one of these sub-intervals represents a symbol string starting with the corresponding two-bit prefix.
[0008] The decoding process is repeated. Sub-intervals are themselves divided into smaller sub-intervals representing probabilities of the value of the next bit in the symbol string. The process produces a partition of the unit interval having sub-intervals that correspond to each possible value of the symbol string. Any code string in the interval can be chosen corresponding to the encoded symbol string.
[0009] According to theory, when an interval is divided into sub-intervals, the length of each sub-interval should be proportional to the probability of the value of the next data symbol to be decoded given the previous symbol bits. The probability distribution of the code string therefore would be uniform in the interval. Since each code bit is equally likely to be a 0 or a 1, it would carry as much information as information theory allows. In other words, the coder would achieve entropic compression.
[0010] The known Q-Coder and QM-Coder, while they represent advances over traditional arithmetic coders, do not provide performance that approaches entropic compression. Thus, there is a need in the art for a binary arithmetic coder that provides improved compression ratios over the Q-Coder and the QM-Coder.
[0011] Decoding speed is an important performance characteristic of data coding systems. Decoding latency, the time that is required to generate decoded data once the coded data is received, should be minimized wherever possible. Thus, decoders that introduce lengthy or complex computational processes to the decoding operation are disfavored. Accordingly, there is a need in the art for a data decoding scheme that is computationally simple and provides improved throughput of decoded data.
[0012] The present invention provides a binary arithmetic coder and decoder having important advantages over the prior art.
The coding scheme provides improved coding accuracy over the prior art due to improved probability estimation and adaptation. It provides improved decoding speed through a “fast path” design wherein decoding of a most probable symbol requires few computational steps.
[0013] According to the present invention, coded data represents data that is populated by more probable symbols (“MPS”) and less probable symbols (“LPS”). In an embodiment, the decoder receives a segment of the coded data as a binary fraction C. It defines a coding interval of possible values of C, the interval extending from a variable lower bound A to a constant upper bound 1. For each position in the decoded symbol string, the decoder computes a test value Z that subdivides the coding interval into sub-intervals according to the relative probabilities that an MPS or an LPS occurs in the position. A first sub-interval extends from the lower bound A to the test value Z; a second sub-interval extends from the test value Z to 1. If C is greater than Z, the decoder emits an MPS for the current position in the decoded symbol string and sets the lower bound A to the test value Z for use during decoding of the next position in the decoded symbol string. If C is less than Z, the decoder emits an LPS and computes a new lower bound A and a new binary fraction C for use during decoding of the next position in the decoded symbol string. The encoder operates according to analogous techniques to compose coded data from original data.
[0014] FIG. 1 is a high-level system block diagram of a known binary arithmetic coder.
[0015] FIG. 2 illustrates a method of operation of a decoder according to a first embodiment of the present invention.
[0016] FIGS. 3 and 4 respectively illustrate interval parameters as a function of interval splitting variables in an entropic coding application and a QM-Coder of the prior art.
[0017] FIG. 5 illustrates interval parameters as a function of an interval splitting variable in the present invention.
[0018] FIG. 6 illustrates a method of operation of a decoder according to a second embodiment of the present invention.
[0019] FIG. 7 illustrates a method of operation of an encoder according to an embodiment of the present invention.
[0020] FIG. 8 is a graph illustrating a comparison between an optimal increment parameter and an increment parameter in use in an embodiment of the present invention.
[0021] FIG. 9 illustrates a method of operation of a decoder according to a third embodiment of the present invention.
[0022] The present invention provides a data coding system, labeled the “Z-Coder,” that provides improved compression ratios over traditional binary arithmetic coders. The decoder of the Z-Coder system may be optimized to provide very fast decoding of coded data.
[0023] To facilitate an understanding of the invention, the decoding scheme of the present invention is described first. A method of operation
[0024] When C(t) is received and before the decoder tests its value, C(t) may take any value between 0 and 1 (C(t) ∈ [0,1[). The decoder maintains a second variable, labeled “A(t),” that represents a lower bound of possible values of C(t). Thus, the decoder sets A(1)=0 as an initial step (Step
[0025] Decoding of the t
[0026] However, if C(t) is less than Z(t), the next bit to be decoded is the LPS (Step
[0027] After step
[0028] The decoder of the present invention may be implemented in a microprocessor or a digital signal processor.
In such an implementation, values of A(t) and C(t) are stored in data registers having a fixed length, such as 16-bit registers. Renormalization causes a shift of data in each register one bit position to the left. It shifts the most significant bit out of A(t) and C(t). The shift of data in the register storing C(t) permits a new bit to be retrieved from the channel and stored in the least significant position in the register.
[0029] Because A(t) is always less than or equal to C(t), it is necessary to test only the first bit position of A(t). If that bit position is a one (1), then the decoder determines that a renormalization shift should be performed.
[0030] The method of FIG. 2 works so long as both the encoder and the decoder use the same test values Z(t) for testing and adjusting the lower bound of register A(t).
[0031] The Z-Coder provides compression ratios that approach entropic compression ratios. It provides a closer approximation of entropic compression than prior art coders. Entropic compression is achieved when Z(t) splits the interval [A(t),1[ precisely in proportion with the probabilities PLPS and PMPS. For entropic compression: Z(t)=A(t)+PLPS(1−A(t)).
[0032] Unfortunately, calculation of a test value that achieves entropic compression would require a multiplication to be performed, a computationally slow operation. FIG. 3 illustrates lines representing the test value Z(t) as a function of PLPS for several values of A(t) under entropic conditions. The multiplication arises because each line has a different slope. FIG. 4 illustrates an approximation used by the QM-Coder implemented to avoid the slow multiplications. The QM-Coder deviates significantly from entropic compression.
[0033] The Z-Coder avoids slow multiplications. The Z-Coder computes an approximation of the entropic test value using two line segments having constant slopes. Shown in FIG. 5, the first line segment has slope
[0034] This solution is implemented by computing Z(t) as the minimum of the two following quantities: Z1(t)=A(t)+p/R1 and Z2(t)=½+A(t)/2−(½−p)/R2, wherein R1 and R2 are powers of two,
[0035] where p is approximately equal to but slightly lower than PLPS.
[0036] when k=½, for instance, Z2(t) may be computed as ¼+Z1(t)/2.
[0037] when k=¼, for instance, Z2(t) may be computed as ⅜+[A(t)+Z1(t)]/4.
[0038] Multiplication of binary numbers by values which are a power of two (¼, ½, 2, 4, 8, . . . ) requires only a data shift to be performed rather than a true multiplication. Thus, the simplified expressions can be computed quickly.
[0039] The decoding algorithm of FIG. 2 may be implemented in software code by the following subroutine:
[0040] boolean decoder (int p, boolean mps)
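The body of this subroutine does not survive in the present text. The following C sketch is a reconstruction of one plausible form, not the original listing: it assumes 16-bit fixed-point arithmetic (ONE=0x10000 representing 1.0), the split-point formulas of claim 3 with R1=1 and R2=2, a hypothetical next_code_bit() channel primitive, and resolves the tie C(t)=Z(t) toward the MPS.

    #include <stdbool.h>
    #include <stdint.h>

    #define ONE  0x10000u        /* fixed-point 1.0: 16 fractional bits */
    #define HALF 0x8000u         /* fixed-point 0.5 */

    static uint32_t a;           /* lower bound A(t), in [0, ONE) */
    static uint32_t c;           /* code fraction C(t), in [a, ONE) */

    extern int next_code_bit(void);   /* hypothetical: next bit from the channel */

    /* One decoding step per FIG. 2; p is the increment (fixed-point estimate of PLPS). */
    static bool decoder(uint32_t p, bool mps)
    {
        uint32_t z1 = a + p;                          /* first test quantity  */
        uint32_t z2 = HALF + a / 2 - (HALF - p) / 2;  /* second test quantity */
        uint32_t z  = (z1 < z2) ? z1 : z2;            /* Z(t) = min(Z1, Z2)   */

        bool bit;
        if (c >= z) {            /* C in [Z,1): most probable symbol */
            bit = mps;
            a = z;
        } else {                 /* C in [A,Z): least probable symbol */
            bit = !mps;
            a += ONE - z;
            c += ONE - z;
        }
        while (a >= HALF) {      /* renormalize: shift the MSB out of A and C */
            a = 2 * a - ONE;
            c = 2 * c - ONE + (uint32_t)next_code_bit();
        }
        return bit;
    }

Since A(t) ≤ C(t) always holds, the renormalization loop needs to test only A(t), exactly as paragraph [0029] describes; the shifted-in channel bit lands in the least significant position of C(t).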
[0041] The decoding method of FIG. 2 provides adaptive binary arithmetic decoding that achieves compression ratios that are much closer to entropic compression ratios than are achieved by arithmetic decoders of the prior art. The Z-Coder provides much better data compression than the prior art adaptive binary arithmetic decoders.
[0042] FIG. 6 illustrates a method of operation of a decoder according to a second embodiment of the present invention. The decoding method
[0043] The design of the fast decoder capitalizes upon the fact that an MPS may be returned as soon as it is determined that Z1(t) is smaller than C(t) and also smaller than ½:
[0044] Z(t) is rarely greater than ½ because p is often very small;
[0045] Z(t) is rarely greater than C(t) because PLPS is usually very small; and
[0046] Re-normalization rarely occurs because the compression ratio of the Z-Coder is very good (the decoding algorithm produces many more bits than it consumes).
[0047] The fast decoding method is initialized in the same manner as the traditional method of FIG. 2. A(1) is set to 0 (Step
[0048] The decoder compares Z1(t) against F(t) (Step
[0049] If, at step
[0050] The decoder determines whether Z(t) is greater than C(t) (Step
[0051] Thereafter, the decoder may perform re-normalization in a manner similar to the decoding method of FIG. 2 (Step
[0052] The optimized decoder may be implemented in software using the following code:
[0053] boolean decoder_fast(int p, boolean mps)
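As with the previous listing, the original fast-path code is missing here; the sketch below, reusing the definitions above, is one plausible shape for it. With the R2=2 split, Z1(t) ≤ ½ implies min(Z1,Z2)=Z1, so whenever Z1(t) falls below the fence F(t)=min(C(t),½) the symbol is an MPS, no renormalization is possible, and a single comparison plus one assignment suffice.

    static uint32_t fence;       /* F(t) = min(c, HALF); must be initialized
                                    once the first code fraction C is loaded,
                                    and refreshed whenever C changes */

    /* Fast-path decoding step per FIG. 6. */
    static bool decoder_fast(uint32_t p, bool mps)
    {
        uint32_t z1 = a + p;
        if (z1 < fence) {        /* fast path: MPS, no renormalization possible */
            a = z1;
            return mps;
        }
        /* slow path: finish the split, decode, renormalize, refresh the fence */
        uint32_t z2 = HALF + a / 2 - (HALF - p) / 2;
        uint32_t z  = (z1 < z2) ? z1 : z2;
        bool bit;
        if (c >= z) {
            bit = mps;
            a = z;
        } else {
            bit = !mps;
            a += ONE - z;
            c += ONE - z;
        }
        while (a >= HALF) {
            a = 2 * a - ONE;
            c = 2 * c - ONE + (uint32_t)next_code_bit();
        }
        fence = (c < HALF) ? c : HALF;   /* keep F(t) at min(C, 1/2) */
        return bit;
    }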
[0054] As is shown, if Z1(t) is less than or equal to F(t), labeled “fence” in the software description, the decoder performs only a single step and then returns. The remainder of the decoding sub-routine is not performed. The fast decoder therefore provides improved decoding performance by minimizing decoding latency.
[0055] The encoder performs data encoding using the same principles as the decoding methods described above. A method of operation of an encoder constructed in accordance with an embodiment of the present invention is illustrated in FIG. 7. The encoder maintains a code string interval of the form [A(t)−S(t),1−S(t)[. The interval can be interpreted as a lower bound A(t) on a number that plays the same role that C(t) does in the decoder. The code string is obtained by subtracting S(t) from the number. The quantity S(t) accumulates all the terms that are added to C(t) in the LPS branch of the decoder described above with regard to either FIGS.
[0056] To encode an MPS, a new interval [Z(t)−S(t),1−S(t)[ must be set. This is achieved by setting A(t+1)=Z(t). To encode an LPS, a new interval must be set to [A(t)−S(t),Z(t)−S(t)[, which is readily achieved by setting A(t+1)=A(t)+1−Z(t) and S(t+1)=S(t)+1−Z(t).
[0057] The encoding method is initialized by setting A(1) and S(1) to 0 (Step
[0058] The encoder examines a bit in the data stream to be coded (Step
[0059] Coded data bits are emitted from the encoder only if A(t)≧½. While A(t)≧½, the encoder iteratively emits a bit of the code string (as the most significant bit of 1−S(t)) and shifts A(t) and S(t) a bit position to the left (Steps
[0060] Thereafter, the encoder returns to step
[0061] In a microprocessor or digital signal processor implementation, the encoder again stores values of A(t) and S(t) in data registers having fixed lengths. However, it should be appreciated that when an LPS is encoded, the shift S(t+1)=S(t)+1−Z(t) may cause a carry in the register storing the S value. The carry must be preserved. Accordingly, if a 16-bit register is used, for example, to store values of A(t), then S(t) must be stored in a 17-bit register. Because the result of register S can overflow to a 17th bit, two techniques may be employed to manage emission of coded bits: bit counting and bit stuffing.
[0062] Bit counting consists of delaying issuance of the coded data string until a one is emitted or until a borrow propagation turns all the zeros into ones. This method may be implemented by keeping count of the number of zeros recently emitted.
[0063] Bit stuffing consists of inserting a dummy one so that no sequence of zeros exceeds a predefined length.
[0064] Bit stuffing may reduce the compression ratio but it sets an upper limit on the delay between encoding of a symbol and the emission of the corresponding code bits.
[0065] The encoding method of FIG. 7 may be implemented in software, employing the following code:
[0066] void encoder (boolean bit, int p, boolean mps)
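The encoder listing is likewise missing from this text. A hedged fixed-point sketch follows, sharing the definitions above; emit_code_bit() is a hypothetical output primitive, and the borrow that an LPS can push into bits already emitted (the 17th bit of S discussed in paragraph [0061], handled by bit counting or bit stuffing) is deliberately omitted to keep the sketch short.

    static uint32_t s;           /* accumulator S(t); a real implementation
                                    needs a 17th carry bit (see [0061]) */

    extern void emit_code_bit(int bit);   /* hypothetical channel output */

    /* One encoding step per FIG. 7; borrow/carry handling omitted. */
    static void encoder(bool bit, uint32_t p, bool mps)
    {
        uint32_t z1 = a + p;
        uint32_t z2 = HALF + a / 2 - (HALF - p) / 2;
        uint32_t z  = (z1 < z2) ? z1 : z2;

        if (bit == mps) {
            a = z;               /* new interval [Z - S, 1 - S) */
        } else {
            a += ONE - z;        /* new interval [A - S, Z - S) */
            s += ONE - z;        /* may carry into the 17th bit of S */
        }
        while (a >= HALF) {      /* emit the most significant bit of 1 - S(t) */
            int b = (ONE - s > HALF);        /* assumes s <= ONE (no pending borrow) */
            emit_code_bit(b);
            s = 2 * s + ((uint32_t)b << 16) - ONE;  /* S(t+1) = 2S(t) + b - 1 */
            a = 2 * a - ONE;                        /* shift A left */
        }
    }

The shift S(t+1)=2S(t)+b−1 follows from requiring 1−S(t+1)=2(1−S(t))−b, i.e., the emitted bit b is peeled off the code number and the remaining interval is rescaled by two, mirroring the decoder's renormalization.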
[0067] The encoders and decoders of the Z-Coding system use an increment parameter p that represents the estimated probabilities of the LPS and MPS symbols. This section presents an analytic derivation of the relation between the symbol probability distribution PLPS and the optimal increment parameter p. This derivation relies on the analysis of a theoretical experiment that involves decoding a random string of independent equiprobable bits with a particular value of the increment p. The probability PLPS in the decoded symbol string can be calculated with the following simplifying assumptions:
[0068] A(t) contains a uniform random number in interval [0,½[. This uniform distribution hypothesis is reasonably supported by empirical evidence, as long as the greatest common divisor of the increment p and the interval size ½ is small.
[0069] C(t) contains a uniform random number in interval [A(t),1[. This assumption is implied by the definition of the lower bound A(t) and by the random nature of the code string.
[0070] The assumptions also eliminate dependencies between consecutive decoded symbols. It is assumed that each bit is decoded with random values A(t) and C(t), regardless of previous decoding actions. Eliminating dependencies between consecutive symbols is surprisingly realistic. Real life applications tend to mix many streams of symbols with different probabilities into a single arithmetic coder. The interleaved mixture randomizes A(t) and C(t) quite efficiently.
[0071] Under these assumptions, the decoded symbols are independent identically distributed random variables. The probability of LPS can be derived using the following decomposition: P*(LPS)=E[P(C(t)<Z(t)|A(t))]=2∫₀^½ (Z(a)−a)/(1−a) da.
[0072] Using this decomposition and the simplifying assumptions described above, a simple exercise in integral calculus provides analytical formulas relating P*(LPS) and p for each chosen value of the slope k (see FIG. 8).
[0073] The case k=½, for instance, resolves to the following formula:
[0074] Decoding a random sequence of independent equiprobable bits produces a random sequence of independent symbols distributed as derived above. Conversely, encoding such a random sequence of symbols, under the same assumptions, produces a random sequence of equiprobable bits. That means that the increment p is the optimal increment for symbol string distribution P*(LPS).
[0075] This formula has been confirmed by empirical experiments seeking the optimum increment for chosen symbol probability distributions. Encoding a random symbol string with this optimal increment produces about 0.5% more code bits than predicted by the information theoretical limit. This is probably the price of the additive approximation to the computation of the z-value.
[0076] The following discussion presents a stochastic algorithm that automatically adapts the Z-Coder parameters (p and MPS) while encoding or decoding symbol strings.
[0077] The adaptation algorithm must remember some information about the observed symbol frequencies in the symbol string. It is convenient in practice to represent this information as a single integer state. Typical data compression applications maintain an array of state variables (also called “coding contexts”). Each symbol is encoded with a coding context chosen according to application specific prior information about its probability distribution.
[0078] The integer state is used as an index into a table defining the actual coder parameters, i.e., the identity of the MPS (zero or one) and the probability PLPS (a number in [0,½]).
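The formula elided at paragraph [0073] can be reconstructed under the stated assumptions. The derivation sketch below assumes A(t) uniform on [0,½[, C(t) uniform on [A(t),1[, an LPS decoded exactly when C(t)<Z(t), and the split point Z(a)=min(a+p, ¼+(a+p)/2) implied by paragraphs [0034]-[0036] with R1=1 and R2=2; it is consistent with the text but is not the patent's verbatim algebra.

    P^*(\mathrm{LPS}) \;=\; \mathbb{E}_A\bigl[\Pr(C < Z(A) \mid A)\bigr]
                      \;=\; 2\int_0^{1/2} \frac{Z(a)-a}{1-a}\,da

Splitting the integral at a = ½ − p, where the two line segments cross:

    P^*(\mathrm{LPS})
      \;=\; 2\int_0^{1/2-p} \frac{p}{1-a}\,da
          \;+\; \int_{1/2-p}^{1/2} \frac{\tfrac12 + p - a}{1-a}\,da
      \;=\; p\,(1 + 2\ln 2) \;-\; \Bigl(\tfrac12 + p\Bigr)\ln(1 + 2p)

For small p this behaves as 2p·ln 2, so the optimal increment is somewhat smaller than P*(LPS), in line with the statement in paragraph [0035] that p lies below PLPS.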
The Z-Coder adaptation algorithm modifies the value of the state variable when certain conditions are verified:
[0079] Encoding or decoding an LPS always triggers an LPS adaptation. The state variable is then changed to point to a table entry with a larger value of the increment p, or, if the increment is already large, to point to a table entry with swapped definitions of the MPS and LPS symbols.
[0080] Encoding or decoding an MPS triggers an MPS adaptation if and only if A(t) is greater than a threshold m in [½−p,½[, tabulated as a function of the current state. The state variable is changed to point to a table entry with a smaller value of the increment p. In another embodiment, encoding or decoding an MPS triggers an MPS adaptation if and only if Z(t), which is related to A(t) in a known way, is greater than a threshold tabulated as a function of the current state.
[0081] FIG. 9 illustrates a method of operation
[0082] Like steps from FIG. 6 are indicated with like reference numerals. After step
[0083] At step
[0084] The remaining discussion pertains to symmetrical linear transition tables. These tables are organized like a ladder. The first rung represents the symbol distribution with the highest probability of zeroes. The last rung represents the symbol distribution with the highest probability of ones. Each LPS transition moves the state variable one step towards the center of the ladder. Each MPS transition moves the state variable one step towards the closest tip of the ladder.
[0085] The limiting distribution of the state variable depends on the respective probabilities of the adaptation events. In the case of a symmetrical transition table, these probabilities must fulfill the following conditions:
[0086] P(MPS adaptation)<P(LPS adaptation) if p is too small
[0087] P(MPS adaptation)>P(LPS adaptation) if p is too large
[0088] P(MPS adaptation)=P(LPS adaptation) if p is optimal
[0089] These conditions imply that the probabilities of both adaptation events must have the same order of magnitude. The Z-Coder adaptation algorithm uses Z(t) as a pseudo-random number generator to tune the probability of the MPS adaptation events.
[0090] Analytical expressions for the probabilities of the adaptation events are derived by assuming again that the lower bound register A contains a uniform random number in [0,½[. The following formulae are easily obtained by analyzing the encoding algorithm:
[0091] FIG. 8 compares the adaptation event probabilities as a function of the optimal increment p when the threshold m is equal to ½. These curves show that this value of the threshold makes the probability of the MPS adaptation event too high. A larger threshold is needed to reduce the probability of the MPS adaptation event until it becomes equal to the probability of the LPS adaptation event.
[0092] For each value of the state variable, a threshold m is chosen in order to ensure that both adaptation events occur with the same probability when the increment p is optimal for the current value of PLPS,
[0093] where P*(LPS) is the expression derived above.
[0094] The Z-Coder adaptation algorithm differs significantly from the adaptation scheme introduced by the Q-Coder and used by the QM-Coder. These coders perform an MPS adaptation whenever encoding or decoding an MPS produces or consumes a code bit. This is similar to using a constant threshold m=½ with the Z-Coder adaptation algorithm. An optimally tuned Q-Coder or QM-Coder therefore produces more MPS adaptation events than LPS adaptation events.
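The adaptation rule lends itself to a small table-driven sketch. The ztable layout and contents here are assumptions (real tables are tuned offline using the P*(LPS) analysis above); only the triggering logic follows the text: an LPS always adapts, and an MPS adapts only when Z(t) exceeds the tabulated threshold, using Z(t) as a pseudo-random trigger.

    /* Hypothetical adaptation-table entry; the actual table contents are
     * derived offline and are not reproduced in this text. */
    struct zstate {
        uint32_t p;         /* increment for this state */
        uint32_t m;         /* MPS-adaptation threshold, in [HALF - p, HALF) */
        uint16_t lps_next;  /* state after LPS adaptation: larger p, or MPS/LPS swap */
        uint16_t mps_next;  /* state after MPS adaptation: smaller p */
    };

    extern const struct zstate ztable[];   /* assumed, application-tuned */

    /* Update one coding context after a symbol is coded, per FIG. 9. */
    static void adapt(uint16_t *state, bool was_lps, uint32_t z)
    {
        if (was_lps) {
            *state = ztable[*state].lps_next;   /* LPS always triggers adaptation */
        } else if (z > ztable[*state].m) {      /* Z(t) as pseudo-random trigger */
            *state = ztable[*state].mps_next;
        }
    }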
This is compensated by a careful design of asymmetrical state transition tables.
[0095] The Z-Coder state transition tables, however, are free of these constraints. This can be a significant advantage for creating efficient state transition tables in an analytically principled way.
[0096] The encoder or decoder of the present invention may be provided on a processor or digital signal processor with appropriate program instructions.
[0097] As shown herein, the Z-Coder is an adaptive binary arithmetic coder having the following characteristics:
[0098] A new multiplication-free approximation of the interval splitting point provides improved coding accuracy.
[0099] The decoder only keeps a lower bound on the code number, a simplification that leads to very fast implementations of the decoding algorithm.
[0100] The two registers used by both the encoding and the decoding algorithms require only sixteen bits and a carry bit, an implementation benefit that reduces the cost of implementation of the Z-Coder.
[0101] A new probability adaptation scheme reduces the constraints on state transition tables.