Publication number: USRE35781 E
Publication type: Grant
Application number: US 08/553,235
Publication date: May 5, 1998
Filing date: Nov 7, 1995
Priority date: Jan 31, 1989
Fee status: Paid
Inventors: Fumitaka Ono, Shigenori Kino, Masayuki Yoshida, Tomohiro Kimura
Original Assignee: Mitsubishi Denki Kabushiki Kaisha
Coding method of image information
Abstract
A coding method of a binary Markov information source comprises the steps of providing a range on a number line from 0 to 1 which corresponds to an output symbol sequence from the information source, and performing data compression by expressing in binary the position information on the number line corresponding to the output symbol sequence. The present method further includes the steps of providing a normalization number line to keep a desired calculation accuracy by expanding a range of the number line which includes a mapping range, by a multiple of a power of 2, when the mapping range falls below 0.5 of the range of the number line; allocating a predetermined mapping range on the normalization number line to less probable symbols (LPS) in proportion to their occurrence probability; allocating the remaining mapping range on the normalization number line to more probable symbols (MPS); and, when the allocated remaining range falls below 0.5, reassigning from the predetermined mapping range to the remaining mapping range half of the portion by which the remaining range falls short of 0.5.
Claims(12)
What is claimed is:
1. A method for coding information from a binary Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPS) and more probable symbols (MPS), each having an occurrence probability, on a normalization number line, said method comprising the steps of:
a) storing in a memory storage device a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,
b) keeping a desired calculation accuracy by expanding a range of the normalization number line which includes a mapping range by means of a multiple of a power of 2 when the mapping range becomes less than 0.5,
c) allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,
d) allocating the remaining portion of said number line as a mapping interval for said MPSs,
e) reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when the LPS mapping range exceeds 0.5, and
f) repeating steps b, c, d and e.
2. A coding method as set forth in claim 1 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.
3. A coding method as set forth in claim 1 further including the steps of assigning as an offset value the difference between 1 and the mapping interval after a current step (b), and coding a base value as a codeword by calculating the offset value by using the difference between the upper limit of said mapping range just after a previous step (b) and a lower limit of mapping range just before the current step (b).
4. An apparatus for coding information from a binary Markov information source by binary coding an output symbol sequence comprising less probable symbols (LPSs) and more probable symbols (MPS) from said information source on a normalization number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,
means for keeping desired calculation accuracy by expanding a range on said normalization number line, which includes a mapping range, by a multiple power of 2 when the mapping range becomes less than 0.5,
means for allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,
means for allocating the remaining portion of said normalization number line as a mapping interval for said MPSs,
means for reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when said LPS mapping interval exceeds 0.5.
5. An apparatus as set forth in claim 4 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.
6. An apparatus as set forth in claim 4 further comprising means for assigning an offset value, said offset value being the difference between 1 and the mapping interval after the range of the normalization number line is expanded, and means for coding a base value as a codeword by using the difference between the upper limit of the mapping range just after a previous expansion of the normalization number line and a lower limit of said mapping range before expansion.
7. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs), each sequence having an occurrence probability on a number line, said method comprising,
(a) storing in a memory storage device a number line having a range which corresponds to said output symbol sequence;
(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPS;
(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and
(d) controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.
8. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a number line having a range which corresponds to said output symbol sequence;
means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and
control means for controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.
9. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) each having an occurrence probability on a number line, said method comprising,
(a) storing in a memory storage device a number line having a range which corresponds to said output symbol sequence;
(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and
(d) reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when the LPSs mapping range exceeds the prescribed value, and
(e) repeating steps b, c, and d.
10. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a number line having a range which corresponds to said output symbol sequence;
means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and
means for reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when said LPSs mapping range exceeds the prescribed value.
11. A decoding method for a Markov information source coded by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for preceding symbols;
outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;
comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and
adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.
12. A decoding method for a Markov information source coded by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for preceding symbols;
outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;
comparing a range on the number line of more probable symbols with a fixed value; and
adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that, when a range of more probable symbols is below the fixed value on a number line, half of the value by which the range of more probable symbols is below the fixed value is moved from the range of less probable symbols to that of more probable symbols.
13. A coding method for a Markov information source by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for preceding symbols;
coding a signal according to a result of correspondence between the ranges to generate a codeword;
comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and
adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.
14. A coding method for a Markov information source by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for preceding symbols;
coding a signal according to a result of correspondence between the ranges to generate a codeword;
comparing a range on the number line of more probable symbols with a fixed value; and
adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that, when a range of more probable symbols is below the fixed value on a number line, half of the value by which the range of more probable symbols is below the fixed value is moved from the range of less probable symbols to that of more probable symbols.
Description

This application is a continuation of application Ser. No. 08/139,561, filed Oct. 20, 1993, now abandoned.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to a coding method of image information or the like.

2. Description of Related Art

For coding a Markov information source, the number line representation coding system is known, in which a sequence of symbols is mapped onto the number line from 0.0 to 1.0 and its coordinates are coded as code words which are, for example, represented in binary. FIG. 1 is a conceptual diagram thereof. For simplicity a bi-level memoryless information source is shown; the occurrence probability of "1" is set at r and the occurrence probability of "0" at 1-r. When the output sequence length is set at 3, each of the rightmost coordinates C(000) to C(111) is represented in binary and truncated at the digit at which the code words can be distinguished from each other, and these truncated values are defined as the respective code words. Decoding is possible at the receiving side by performing the same procedure as at the transmission side.

In such a sequence, the mapping interval A(i) and the lower-end coordinate C(i) of the symbol sequence at time i are given as follows.

When the output symbol a(i) is 0 (More Probable Symbol, hereinafter called MPS):

A(i) = (1 - r) * A(i-1)

C(i) = C(i-1) + r * A(i-1)

When the output symbol a(i) is 1 (Less Probable Symbol, hereinafter called LPS):

A(i) = r * A(i-1)

C(i) = C(i-1)
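The two recursions above can be sketched directly (a minimal illustration of number line coding, not the patented apparatus; function and variable names are our own):

```python
def encode_interval(symbols, r):
    """Exact number-line recursion: the LPS ("1") occupies the bottom
    r-fraction of the current interval, the MPS ("0") the rest, so the
    width A and lower end C follow the formulas above."""
    A, C = 1.0, 0.0            # current interval width and lower-end coordinate
    for a in symbols:
        if a == 0:             # MPS: skip over the LPS sub-interval
            C = C + r * A
            A = (1 - r) * A
        else:                  # LPS: lower end unchanged
            A = r * A
    return C, A                # any point in [C, C + A) identifies the sequence

# For r = 0.5 the intervals halve evenly:
C, A = encode_interval([0, 1], 0.5)   # C = 0.5, A = 0.25
```

The decoder runs the same subdivision and picks whichever sub-interval contains the received codeword, which is why transmitter and receiver stay in step.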

As described in "An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder" (IBM Journal of Research and Development, Vol. 32, No. 6, November 1988), it is considered that, in order to reduce the number of calculations such as multiplications, a set of fixed values is prepared and a value is selected from among them, rather than actually calculating r * A(i-1).

That is, if r * A(i-1) in the above expressions is replaced by a fixed value S:

when a(i) = 0,

A(i) = A(i-1) - S

C(i) = C(i-1) + S

when a(i) = 1,

A(i) = S

C(i) = C(i-1)

However, as A(i-1) becomes successively smaller, S also needs to be made smaller in this instance. To keep the calculation accuracy, it is necessary to multiply A(i-1) by a power of 2 (hereinafter called normalization). For an actual code word, the above-mentioned fixed value is assumed to be the same at all times and is multiplied by powers of 1/2 at the time of calculation (namely, shifted as a binary number).
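As a rough sketch of this shift-based scheme (our own simplified model, not the patented circuit), the interval width can be renormalized back into [0.5, 1.0] by repeated doubling, and the doubling count is the number of binary shift digits:

```python
def encode_fixed_S(symbols, S=0.125):
    """Approximate recursion with a fixed LPS width S: A = A - S for the
    MPS and A = S for the LPS.  Whenever A falls below 0.5 it is doubled
    (a one-digit binary shift) until it is back at or above 0.5, so the
    fixed value S itself never has to shrink."""
    A = 1.0
    shift_digits = []          # per-symbol normalization shift counts
    for a in symbols:
        A = A - S if a == 0 else S
        n = 0
        while A < 0.5:         # normalization: multiply by a power of 2
            A *= 2.0
            n += 1
        shift_digits.append(n)
    return A, shift_digits
```

For example, an LPS with S = 1/8 leaves A = 0.125, which takes two doublings (shift digits = 2) to return to 0.5.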

If a constant value is used for S as described above, a problem arises when, in particular, S is large and the normalized A(i-1) is relatively small.

An example is given in the following. If A(i-1) is slightly above 0.5, A(i) is very small when a(i) is an MPS, and is even smaller than the area given when a(i) is an LPS. That is, in spite of the fact that the occurrence probability of the MPS is essentially high, the area allocated to the MPS is smaller than that allocated to the LPS, leading to a decrease in coding efficiency. If it is required that the area allocated to the MPS always be larger than that allocated to the LPS, then since A(i-1) > 0.5, S must be 0.25 or smaller. Therefore, when A(i-1) is 1.0, r = 0.25, and when A(i-1) is close to 0.5, r = 0.5, with the result that the assumed occurrence probability of the LPS varies between 1/4 and 1/2 in coding. If this variation can be made small, an area proportional to the occurrence probability can be allocated and an improvement in coding efficiency can be expected.

SUMMARY OF THE INVENTION

The present invention has been devised to solve the above-mentioned problems, and in particular, it is directed at an increase in efficiency when the occurrence probability of LPS is close to 1/2.

Accordingly, it is an object of the present invention to provide a coding system in which, when the range provided to the more probable symbol falls below 0.5 on the normalized number line, half of the portion by which the allocated area of the more probable symbol falls below 0.5 is moved from the range of the LPS to the range of the MPS, so that coding based on the occurrence probability of the LPS can be performed.

According to the present invention, by changing S according to the value of A(i-1), r is stabilized and coding responsive to the occurrence probability of the LPS can be performed. In particular, when r is 1/2, coding in which r is assumed to be 1/2 at all times, rather than dependent on A(i-1), can be performed, and high efficiency can be expected.
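The correction rule can be stated compactly (a sketch under the assumption that the LPS interval has width S inside a normalized width A, leaving A - S to the MPS; the function name is illustrative):

```python
def effective_lps_width(A, S):
    """Correction of the invention: if the MPS area (A - S) falls below
    0.5 on the normalized number line, half of the shortfall is moved
    from the LPS range back to the MPS range, shrinking the effective
    LPS width and stabilizing r = S_eff / A."""
    mps = A - S
    if mps < 0.5:
        S = S - (0.5 - mps) / 2.0   # halve the portion below 0.5
    return S

# With A = 1.0 no correction is needed; near A = 0.5 the effective
# LPS width shrinks:
S_eff = effective_lps_width(0.625, 0.25)   # MPS area 0.375 < 0.5, so S_eff = 0.1875
```

In this example r = 0.1875 / 0.625 = 0.3, much closer to the nominal S = 0.25 than the uncorrected 0.25 / 0.625 = 0.4.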

Also, according to the present invention, in the number line coding, an area allocated to LPS can be selected depending on the occurrence probability of LPS, therefore it has an advantage in that efficient coding can be realized.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a view of the prior art illustrating the concept of a number line coding;

FIG. 2 is a view illustrating a coding device in accordance with one embodiment of the present invention;

FIG. 3 is a flow chart for coding of one embodiment of the present invention;

FIG. 4 is a flow chart of decoding in one embodiment of the present invention; and

FIG. 5 is an example of an operation in one embodiment of the present invention.

EMBODIMENT

FIG. 2 shows one embodiment of the present invention. An adder 1 adds the value of S, which is input thereto, and the output of an offset calculating unit 3 to calculate the upper-limit address of the LPS. A comparator 2 compares the calculated value with 0.5. When the value is 0.5 or smaller and the occurring symbol is an MPS, the processing of the offset calculating unit 3 is stopped at the addition of the above-mentioned S. Similarly, if the comparator 2 judges that the value is 0.5 or smaller and the occurring symbol is an LPS, the base calculating unit 4 performs a base calculation and outputs the base coordinates as a code. A number-of-shift-digits calculating unit 5 determines the multiple (2^n times) required for normalization (that is, to make the effective range 0.5 to 1.0) from the value of S and outputs it as the number of shift digits.

Next, when the comparator 2 judges the value to be above 0.5 (decimal), the upper-limit address of the LPS is corrected by an LPS upper-limit address correcting unit 6. A base calculation is performed by the base calculating unit 4 to output the base coordinates therefrom. A shift-digit calculation is performed by the number-of-shift-digits calculating unit 5 to output the number of shift digits therefrom. Then, the output base coordinates are processed in an addition register (not shown) to form a code word. The number of shift digits, which has been output from the unit 5, indicates by how many digits the code word to be output next is shifted. The code word is then added in the register. To explain the above-described process more precisely, flowcharts for coding and decoding are shown in FIGS. 3 and 4, respectively. In each of these flowcharts, the case where S is defined as a power of 1/2 is illustrated.

Next, a concrete example of coding will be explained. Suppose that, in FIG. 5, the coordinates are expressed in binary and that S is set at 1/8 or 1/4. First, if S = 1/8 is known from the Markov state in the Markov information source, then 1 (LPS) is assigned to the range from 0.000 to 0.001 and 0 (MPS) is assigned to the range from 0.001 to 1.000. Now, if a 0 symbol occurs, the range is limited to between 0.001 and 1.000. At this time, the offset value is 0.001. For the next symbol, since it is known from the occurrence probability of 1 that S = 1/4 is used in both reception and transmission, 1 is assigned to the range from 0.001 to 0.011. At this point, if 0 occurs, the range of the number line becomes 0.011 to 1.000. Next, if S = 1/4, the upper limit of the allocated range of the LPS is 0.011 + 0.01 = 0.101, which exceeds 0.1 (0.5 in decimal). So a correction in which the portion exceeding 0.1 is halved is made, and the upper limit becomes 0.1001. Suppose the LPS occurs at this point; the size of the LPS area is 0.1001 - 0.011 = 0.0011. If it is multiplied by 2^2, it exceeds 0.1 (0.5 in decimal); therefore, the number of shift digits is 2. The base value is 0.1001 - 0.01 = 0.0101 and this value is output as a code word. The new offset value becomes 0.01, since 0.011 - 0.0101 = 0.0001 is shifted by two digits. Next, S is set at 1/8 and 0.01 + 0.001 = 0.011 becomes the border between 0 and 1. If 0 occurs at this point, the offset value is increased to 0.011. If S is set at 1/4 at this point, this results in 0.011 + 0.01 = 0.101, which exceeds 0.1. As the portion exceeding 0.1 is halved, the value becomes 0.1001. Since the area of 0 is less than 0.1, if the symbol is 0, a base value 0.1000 must be output, and then the range must be normalized by 2^1. In other words, 0.1000 is the base value, so the new offset value is 0.001, which is 2^1 times (0.1001 - 0.1). Suppose that the next state is S = 1/8 and the MPS has occurred; then the border value is 0.001 + 0.001 = 0.010. Further, suppose that the next state is S = 1/4 and 1 (LPS) has occurred; an offset value 0.0100 is output as a code word.

A final code word becomes one which is calculated on the basis of the number of shift digits and the code words which are output as explained above (refer to the lower portion of FIG. 5).

If the value of S is selected from a set of values which are powers of 1/2, such as 1/2, 1/4, or 1/8, the multiples of powers of 2 for normalization can be constant, even if the value of S is varied by the correction when the allocated area of MPS is below 0.5 on the normalization number line. This is advantageous.

When areas are provided to 0 (MPS) and 1 (LPS) in the above-described manner, the relationship between the value of S and the assumed occurrence probability r of the LPS when S is determined is given as follows:

S≦r<S/(1/2+S)

Therefore, when S=1/2, r=1/2, which indicates that it is stable.

If S=1/4, 1/4≦r<1/3.
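The bound S ≦ r < S/(1/2 + S) can be checked numerically (our own scan, with the halving correction described above applied to the MPS shortfall; the helper name is hypothetical):

```python
def assumed_r_range(S, samples=1000):
    """Scan normalized widths A in (0.5, 1.0] and return the smallest and
    largest assumed LPS probability r = S_eff / A, where S_eff applies the
    correction of moving half of the MPS shortfall below 0.5 back to the MPS."""
    rs = []
    for k in range(1, samples + 1):
        A = 0.5 + 0.5 * k / samples
        mps = A - S
        S_eff = S - (0.5 - mps) / 2.0 if mps < 0.5 else S
        rs.append(S_eff / A)
    return min(rs), max(rs)

# For S = 1/4 the scan stays within [1/4, 1/3], matching S <= r < S/(1/2 + S).
```

The maximum occurs where the correction first engages, at A = 1/2 + S, which is exactly where r = S/(1/2 + S).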

On the other hand, if S is fixed in the conventional manner, the assumed occurrence probability r becomes as follows:

S≦r<S/(1/2)=2S

If S=1/2, 1/2≦r<1.

If S=1/4, 1/4≦r<1/2.

That is, since the variation range of r is larger for a conventional system, the system of the present invention is more efficient.

The multi-level information source can be converted into a binary information source by tree development. Therefore, it goes without saying that the present invention can be applied to a multi-level information source.

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4028731 * | Sep 29, 1975 | Jun 7, 1977 | International Business Machines Corporation | Apparatus for compression coding using cross-array correlation between two-dimensional matrices derived from two-valued digital images
US4070694 * | Dec 22, 1975 | Jan 24, 1978 | Olympus Optical Company Limited | Picture image information band compression and transmission system
US4099257 * | Sep 2, 1976 | Jul 4, 1978 | International Business Machines Corporation | Markov processor for context encoding from given characters and for character decoding from given contexts
US4177456 * | Feb 3, 1978 | Dec 4, 1979 | Hitachi, Ltd. | Decoder for variable-length codes
US4191974 * | Feb 7, 1978 | Mar 4, 1980 | Mitsubishi Denki Kabushiki Kaisha | Facsimile encoding communication system
US4286256 * | Nov 28, 1979 | Aug 25, 1981 | International Business Machines Corporation | Method and means for arithmetic coding utilizing a reduced number of operations
US4355306 * | Jan 30, 1981 | Oct 19, 1982 | International Business Machines Corporation | Dynamic stack data compression and decompression system
US4905297 * | Nov 18, 1988 | Feb 27, 1990 | International Business Machines Corporation | Arithmetic coding encoder and decoder system
US4933883 * | May 3, 1988 | Jun 12, 1990 | International Business Machines Corporation | Probability adaptation for arithmetic coders
Non-Patent Citations
1. K. S. Fu et al., Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill Book Company, New York, copyright 1987, pp. 342-351.
2. Pennebaker et al., "An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder," IBM Journal of Research and Development, Vol. 32, No. 6, Nov. 1988, pp. 717-726.
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US5936559 * | Jun 9, 1997 | Aug 10, 1999 | AT&T Corporation | Method for optimizing data compression and throughput
US6188334 * | May 5, 2000 | Feb 13, 2001 | AT&T Corp. | Z-coder: fast adaptive binary arithmetic coder
US6225925 * | Mar 13, 1998 | May 1, 2001 | AT&T Corp. | Z-coder: a fast adaptive binary arithmetic coder
US6281817 * | Feb 28, 2001 | Aug 28, 2001 | AT&T Corp. | Z-coder: a fast adaptive binary arithmetic coder
US6373408 | Apr 11, 2000 | Apr 16, 2002 | Mitsubishi Denki Kabushiki Kaisha | Encoding apparatus, decoding apparatus, encoding/decoding apparatus, encoding method and decoding method
US6476740 | Dec 14, 2001 | Nov 5, 2002 | AT&T Corp. | Z-coder: a fast adaptive binary arithmetic coder
US6756921 | Dec 20, 2001 | Jun 29, 2004 | Mitsubishi Denki Kabushiki Kaisha | Multiple quality data creation encoder, multiple quality data creation decoder, multiple quantity data encoding decoding system, multiple quality data creation encoding method, multiple quality data creation decoding method, and multiple quality data creation encoding/decoding method
US7209593 | Dec 17, 2002 | Apr 24, 2007 | Mitsubishi Denki Kabushiki Kaisha | Apparatus, method, and programs for arithmetic encoding and decoding
US7305138 * | Jul 14, 2003 | Dec 4, 2007 | Nec Corporation | Image encoding apparatus, image encoding method and program
US7333661 | Jul 14, 2003 | Feb 19, 2008 | Mitsubishi Denki Kabushiki Kaisha | Image coding device, image coding method and image processing device
Classifications
U.S. Classification: 341/51, 341/107, 358/1.9
International Classification: H03M7/40, H04N1/417
Cooperative Classification: H03M7/4006, H04N1/417
European Classification: H03M7/40A, H04N1/417
Legal Events
Date | Code | Event | Description
Mar 31, 2003 | FPAY | Fee payment | Year of fee payment: 12
Apr 15, 1999 | FPAY | Fee payment | Year of fee payment: 8