Publication number | USRE35781 E |

Publication type | Grant |

Application number | US 08/553,235 |

Publication date | May 5, 1998 |

Filing date | Nov 7, 1995 |

Priority date | Jan 31, 1989 |

Fee status | Paid |


Inventors | Fumitaka Ono, Shigenori Kino, Masayuki Yoshida, Tomohiro Kimura |

Original Assignee | Mitsubishi Denki Kabushiki Kaisha |




US RE35781 E

Abstract

A coding method of a binary Markov information source comprises the steps of providing a range on a number line from 0 to 1 which corresponds to an output symbol sequence from the information source, and performing data compression by binary-expressing the position information on the number line corresponding to the output symbol sequence. The present method further includes the steps of providing a normalization number line to keep a desired calculation accuracy by expanding a range of the number line which includes a mapping range, by means of a multiple of a power of 2, when the mapping range becomes below 0.5 of the range of the number line; allocating a predetermined mapping range on the normalization number line for less probable symbols (LPS) proportional to their normal occurrence probability; allocating the remaining mapping range on the normalization number line for more probable symbols (MPS); and reassigning from the predetermined mapping range to the remaining mapping range half of the portion by which the allocated remaining range is less than 0.5, when the allocated remaining range becomes below 0.5.

Claims(12)

1. A method for coding information from a binary Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPS) and more probable symbols (MPS), each having an occurrence probability, on a normalization number line, said method comprising the steps of:

a) storing in a memory storage device a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,

b) keeping a desired calculation accuracy by expanding a range of the normalization number line which includes a mapping range by means of a multiple of a power of 2 when the mapping range becomes less than 0.5,

c) allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,

d) allocating the remaining portion of said number line as a mapping interval for said MPSs,

e) reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when the LPS mapping range exceeds 0.5, and

f) repeating steps b, c, d and e.

2. A coding method as set forth in claim 1 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.

3. A coding method as set forth in claim 1 further including the steps of assigning as an offset value the difference between 1 and the mapping interval after a current step (b), and coding a base value as a codeword by calculating the offset value by using the difference between the upper limit of said mapping range just after a previous step (b) and a lower limit of mapping range just before the current step (b).

4. An apparatus for coding information from a binary Markov information source by binary coding an output symbol sequence comprising less probable symbols (LPSs) and more probable symbols (MPS) from said information source on a normalization number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:

memory storage means for storing a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,

means for keeping desired calculation accuracy by expanding a range on said normalization number line, which includes a mapping range, by a multiple of a power of 2 when the mapping range becomes less than 0.5,

means for allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,

means for allocating the remaining portion of said normalization number line as a mapping interval for said MPSs,

means for reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when said LPS mapping interval exceeds 0.5.

5. An apparatus as set forth in claim 4 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.

6. An apparatus as set forth in claim 4 further comprising means for assigning an offset value, said offset value being the difference between 1 and the mapping interval after the range of the normalization number line is expanded, and means for coding a base value as a codeword by using the difference between the upper limit of the mapping range just after a previous expansion of the normalization number line and a lower limit of said mapping range before expansion.

7. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs), each sequence having an occurrence probability on a number line, said method comprising,

(a) storing in a memory storage device a number line having a range which corresponds to said output symbol sequence;

(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPS;

(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and

(d) controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.

8. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:

memory storage means for storing a number line having a range which corresponds to said output symbol sequence;

means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;

means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and

control means for controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.

9. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) each having an occurrence probability on a number line, said method comprising,

(a) storing in a memory storage device a number line having a range which corresponds to said output symbol sequence;

(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;

(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and

(d) reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when the LPSs mapping range exceeds the prescribed value, and

(e) repeating steps b, c, and d.

10. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:

memory storage means for storing a number line having a range which corresponds to said output symbol sequence;

means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;

means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and

means for reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when said LPSs mapping range exceeds the prescribed value.

11. A decoding method for a Markov information source coded by binary coding comprising the steps of:

associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;

outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;

comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and

adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.

12. A decoding method for a Markov information source coded by binary coding comprising the steps of:

associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;

outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;

comparing a range on the number line of more probable symbols with a fixed value; and

adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that, when a range of more probable symbols is below the fixed value on a number line, half of the value below the fixed value of the range of more probable symbols is moved from the range of less probable symbols to that of more probable symbols.

13. A coding method for a Markov information source by binary coding comprising the steps of:

associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;

coding a signal according to a result of correspondence between the ranges to generate a codeword;

comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and

adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.

14. A coding method for a Markov information source by binary coding comprising the steps of:

associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;

coding a signal according to a result of correspondence between the ranges to generate a codeword;

comparing a range on the number line of more probable symbols with a fixed value; and

adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that, when a range of more probable symbols is below the fixed value on a number line, half of the value below the fixed value of the range of more probable symbols is moved from the range of less probable symbols to that of more probable symbols.

Description

This application is a continuation of application Ser. No. 08/139,561, filed Oct. 20, 1993, now abandoned.

1. Field of the Invention

This invention relates to a coding method of image information or the like.

2. Description of Related Art

For coding a Markov information source, the number line representation coding system is known, in which a sequence of symbols is mapped onto the number line from 0.0 to 1.0 and its coordinates are coded as code words represented, for example, in binary. FIG. 1 is a conceptual diagram thereof. For simplicity a bi-level memoryless information source is shown, with the occurrence probability of "1" set at r and the occurrence probability of "0" set at 1-r. When the output sequence length is set at 3, the coordinates of each of the rightmost points C(000) to C(111) are represented in binary and truncated at the digit at which they can be distinguished from one another, and these are defined as the respective code words; decoding is possible at the receiving side by performing the same procedure as at the transmission side.
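The number-line mapping just described can be sketched in Python. This is an illustration, not the patent's implementation; in particular, the convention that symbol 1 (probability r) occupies the lower sub-interval is an assumption made for the sketch, chosen to match the recurrences given later in the specification.

```python
import math
from itertools import product

def interval_for(seq, r):
    """Map a symbol sequence onto a subinterval of [0, 1).
    Assumed convention: symbol 1 (probability r) takes the lower part
    of the current interval, symbol 0 the part above it."""
    low, width = 0.0, 1.0
    for bit in seq:
        if bit == 1:
            width *= r                 # "1" keeps the lower sub-interval
        else:
            low += width * r           # "0" takes the sub-interval above it
            width *= (1 - r)
    return low, low + width

def codeword(low, high):
    """Shortest binary fraction m/2^k lying inside [low, high),
    written as a k-digit bit string."""
    k = 1
    while True:
        m = math.ceil(low * 2 ** k)
        if m / 2 ** k < high:
            return format(m, "0{}b".format(k))
        k += 1

# the eight length-3 sequences of FIG. 1 tile [0, 1) exactly
widths = [b - a for a, b in (interval_for(s, 0.3) for s in product((0, 1), repeat=3))]
```

For example, with r = 0.5 the sequence 000 maps to the interval [0.875, 1.0) under this convention, and its shortest distinguishing code word is "111".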

In such a sequence, the mapping interval A_{i}, and the lower-end coordinates C_{i} of the symbol sequence at time i are given as follows:

When the output symbol ai is 0 (More Probable Symbol: hereinafter called MPS).

A_{i}=(1-r)A_{i-1}

C_{i}=C_{i-1}+rA_{i-1}

When the output symbol ai is 1 (Less Probable Symbol: hereinafter called LPS),

A_{i}=rA_{i-1}

C_{i}=C_{i-1}
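A minimal sketch of these recurrences, assuming (per the corrected equations above) that the LPS sub-interval of width rA_{i-1} lies at the bottom of the current interval and the MPS sub-interval directly above it:

```python
def step(a_prev, c_prev, r, symbol):
    """One step of the interval recurrence: returns (A_i, C_i).
    symbol 0 = MPS (probability 1-r), symbol 1 = LPS (probability r)."""
    if symbol == 0:                            # MPS
        return (1 - r) * a_prev, c_prev + r * a_prev
    else:                                      # LPS
        return r * a_prev, c_prev
```

Starting from A = 1, C = 0 with r = 1/4, an MPS narrows the interval to A = 0.75, C = 0.25; a following LPS gives A = 0.1875 with C unchanged, so the intervals nest as required.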

As described in "An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder" (IBM Journal of Research and Development, Vol. 32, No. 6, November 1988), it is considered that, in order to reduce the number of calculations such as multiplications, a set of fixed values is prepared and a certain value is selected from among them, rather than calculating rA_{i-1} each time.

That is, if rA_{i-1} of the above-mentioned expression is set at S,

when ai=0,

A_{i}=A_{i-1}-S

C_{i}=C_{i-1}+S

when ai=1,

A_{i}=S

C_{i}=C_{i-1}

However, as A_{i-1} becomes successively smaller, S also needs to become smaller in this instance. To keep the calculation accuracy, it is necessary to multiply A_{i-1} by a power of 2 (hereinafter called normalization). In an actual code word, the above-mentioned fixed value is assumed to be the same at all times and is multiplied by powers of 1/2 at the time of calculation (namely, shifted as a binary number).
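The fixed-S update together with normalization can be sketched as follows. This is a simplification under the conventions above: a real coder emits the bits shifted out of C and handles carry propagation, which this sketch omits.

```python
def encode_step(a, c, s, symbol):
    """Fixed-S update followed by normalization: S stands in for r*A,
    and A is doubled (binary left shift) until it is at least 0.5,
    C being doubled the same number of times. Returns (A, C, shifts)."""
    if symbol == 0:        # MPS: remaining area, base moved up by S
        a, c = a - s, c + s
    else:                  # LPS: area becomes S itself
        a = s
    shifts = 0
    while a < 0.5:         # normalization by a power of 2
        a *= 2
        c *= 2             # a real coder would emit the shifted-out bits of C
        shifts += 1
    return a, c, shifts
```

After an MPS with S = 1/4 no shift is needed (A = 0.75), while an LPS with S = 1/8 leaves A = 0.125 and requires two doublings to return to [0.5, 1.0).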

If a constant value is used for S as described above, a problem arises when, in particular, S is large and a normalized A_{i-1} is relatively small.

An example thereof is given in the following. If A_{i-1} is slightly above 0.5, A_{i} is very small when ai is an MPS, and is even smaller than the area given when ai is an LPS. That is, in spite of the fact that the occurrence probability of MPS is essentially high, the area allocated to MPS is smaller than that allocated to LPS, leading to a decrease in coding efficiency. If it is required that the area allocated to MPS always be larger than that allocated to LPS, then since A_{i-1}>0.5, S must be 0.25 or smaller. Therefore, when A_{i-1} is 1.0, r=0.25, and when A_{i-1} is close to 0.5, r is close to 0.5, with the result that the assumed occurrence probability of LPS varies between 1/4 and 1/2 in coding. If this variation can be made small, an area proportional to the occurrence probability can be allocated and an improvement in coding efficiency can be expected.
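The effect can be checked with hypothetical numbers; A_{i-1} = 0.55 and S = 0.375 below are illustrative values chosen for the demonstration, not figures from the patent.

```python
# hypothetical values: A_{i-1} slightly above 0.5 and a large fixed S
a_prev, s = 0.55, 0.375
mps_area = a_prev - s              # area left to the more probable symbol
lps_area = s                       # area given to the less probable symbol
assert mps_area < lps_area         # 0.175 < 0.375, despite MPS being more likely
r_assumed = s / a_prev             # LPS probability the coder effectively assumes
assert r_assumed > 0.5             # far above the true LPS probability
```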

The present invention has been devised to solve the above-mentioned problems, and in particular, it is directed at an increase in efficiency when the occurrence probability of LPS is close to 1/2.

Accordingly, it is an object of the present invention to provide a coding system in which, when the range provided to the more probable symbol falls below 0.5 on the normalized number line, half of the portion by which the allocated area of the more probable symbol is below 0.5 is moved from the range of the LPS to the range of the MPS, so that coding based on the occurrence probability of the LPS can be performed.

According to the present invention, by changing S according to the value of A_{i-1}, r is stabilized and coding in accordance with the occurrence probability of LPS can be performed. In particular, when r is 1/2, coding in which r is assumed to be 1/2 at all times, rather than dependent on A_{i-1}, can be performed, and high efficiency can be expected.

Also, according to the present invention, in the number line coding, an area allocated to LPS can be selected depending on the occurrence probability of LPS, therefore it has an advantage in that efficient coding can be realized.

FIG. 1 is a view of the prior art illustrating the concept of a number line coding;

FIG. 2 is a view illustrating a coding device in accordance with one embodiment of the present invention;

FIG. 3 is a flow chart for coding of one embodiment of the present invention;

FIG. 4 is a flow chart of decoding in one embodiment of the present invention; and

FIG. 5 is an example of an operation in one embodiment of the present invention.

FIG. 2 shows one embodiment of the present invention. An adder 1 adds the value of S, which is input thereto, and the output of an offset calculating unit 3 to calculate the upper-limit address of LPS. A comparator 2 compares the calculated value with 0.5. When the value is 0.5 or smaller and an occurrence symbol is MPS, the processing of the offset calculating unit 3 is stopped at the addition of the above-mentioned S. Similarly, if the comparator 2 judges that the value is 0.5 or smaller and the occurrence symbol is LPS, the base calculating unit 4 performs a base calculation and outputs the base coordinates as codes. A number of shift digits calculating unit 5 determines a multiple (2^{n} times) required for normalization (makes the effective range 0.5 to 1.0) from the value of S and outputs it as the number of shift digits.

Next, when the comparator 2 judges the value to be above 0.5 (decimal), the upper-limit address of LPS is corrected by the LPS upper-limit address correcting unit 6. A base calculation is performed by the base calculating unit 4 to output the base coordinates therefrom. A shift digit calculation is performed by the number of shift digits calculating unit 5 to output the number of shift digits therefrom. Then, the output base coordinates are processed in an addition register (not shown) to form a code word. The number of shift digits which has been output from the unit 5 indicates by how many digits the code word to be output next is shifted. The code word is then added in the register. To explain the above-described process more precisely, flowcharts for coding and decoding are shown in FIGS. 3 and 4, respectively. In each of these flowcharts, the case where S is defined as a power of 1/2 is illustrated.
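The correction performed by comparator 2 and the LPS upper-limit address correcting unit 6 can be sketched as a single function. This is a simplified model of those units, not the actual circuit; `offset` denotes the lower limit of the current range on the normalized number line.

```python
def lps_upper_limit(offset, s):
    """Upper limit of the LPS interval: adder 1 computes offset + S, and
    when that exceeds 0.5, half of the excess is given back to the MPS
    interval (the correction of the invention)."""
    u = offset + s
    if u > 0.5:
        u = 0.5 + (u - 0.5) / 2
    return u
```

With offset 0.375 (0.011 in binary) and S = 1/4, the uncorrected upper limit 0.625 (0.101) is corrected to 0.5625 (0.1001), matching the concrete example that follows.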

Next, a concrete example of coding will be explained. Suppose that, in FIG. 5, the coordinates are expressed in a binary system and that S is set at 1/8 or 1/4. First, if S=1/8 is known from the Markov state in a Markov information source, then 1 (LPS) is assigned from 0.000 to 0.001 and 0 (MPS) is assigned from 0.001 to 1.000. Now, if a 0 symbol occurs, the range is limited to between 0.001 and 1.000. At this time, the offset value is 0.001. For the next symbol, since it is known from the occurrence probability of 1 that S=1/4 is used in both reception and transmission, 1 is assigned from 0.001 to 0.011. At this point, if 0 occurs, the range of the number line varies from 0.011 to 1.000. Next, if S=1/4, the upper limit of the allocated range of LPS is 0.011+0.01=0.101, which exceeds 0.1 (0.5 in decimal). So a correction in which the portion exceeding 0.1 is halved is made, and the upper limit becomes 0.1001. At this point, LPS has occurred and the size of the area of LPS is 0.1001-0.011=0.0011. So if it is multiplied by 2^{2}, it exceeds 0.1 (0.5 in decimal). Therefore, the number of shift digits is 2. The base value is 0.1001-0.01=0.0101 and this value is output as a code word. A new offset value becomes 0.01, since 0.011-0.0101=0.0001 is shifted by two digits. Next, S is set at 1/8 and 0.01+0.001=0.011 becomes the border between 0 and 1. If 0 occurs at this point, the offset value is increased to 0.011. If S is set at 1/4 at this point, this results in 0.011+0.01=0.101, which exceeds 0.1. As the portion exceeding 0.1 is halved, the value becomes 0.1001. Since the area of 0 is less than 0.1, if the symbol is 0, a base value 0.1000 must be output, and then it must be normalized 2^{1} times. In other words, 0.1000 is a base value, so a new offset value is 0.001, which is 2^{1} times (0.1001-0.1). Suppose that the next state is S=1/8 and MPS has occurred; then the border value is 0.001+0.001=0.010. Further, suppose that the next state is S=1/4 and 1 (LPS) has occurred; an offset value 0.0100 is output as a code word.

A final code word becomes one which is calculated on the basis of the number of shift digits and the code words which are output as explained above (refer to the lower portion of FIG. 5).

If the value of S is selected from a set of values which are powers of 1/2, such as 1/2, 1/4, or 1/8, the multiples of powers of 2 for normalization can be constant, even if the value of S is varied by the correction when the allocated area of MPS is below 0.5 on the normalization number line. This is advantageous.
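When S is restricted to powers of 1/2, the number of shift digits after an LPS follows directly. The helper below mirrors the role of the number of shift digits calculating unit 5 as a sketch; the closed form in the assertion is an observation, not patent text.

```python
import math

def shift_digits(s):
    """Number of binary shifts needed to bring A = S back into [0.5, 1.0)
    after an LPS, when S is a power of 1/2."""
    n = 0
    while s < 0.5:
        s *= 2
        n += 1
    return n

# for S = 2^(-k), the count is simply k - 1, i.e. -log2(S) - 1
assert shift_digits(0.25) == int(-math.log2(0.25)) - 1
```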

When an area is provided to 0 (MPS) and 1 (LPS) in the above-described manner, the relationship between the value of S and the assumed occurrence probability of LPS when S is determined is given as follows:

S≦r<S/(1/2+S)

Therefore, when S=1/2, r=1/2, which indicates it is stable.

If S=1/4, 1/4≦r<1/3.

On the other hand, if S is fixed in the conventional manner, the assumed occurrence probability r becomes as follows:

S≦r<S/(1/2)=2S

If S=1/2, 1/2≦r<1,

If S=1/4, 1/4≦r<1/2.

That is, since the variation range of r is larger for a conventional system, the system of the present invention is more efficient.
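The two variation ranges can be compared numerically; the functions below are a direct transcription of the inequalities above.

```python
def r_range_invention(s):
    """Assumed LPS probability range with the correction: S <= r < S/(1/2 + S)."""
    return s, s / (0.5 + s)

def r_range_conventional(s):
    """Range with S fixed in the conventional manner: S <= r < 2*S."""
    return s, 2 * s

lo_new, hi_new = r_range_invention(0.25)      # (1/4, 1/3)
lo_old, hi_old = r_range_conventional(0.25)   # (1/4, 1/2)
assert (hi_new - lo_new) < (hi_old - lo_old)  # the invention varies less
```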

The multi-level information source can be converted into a binary information source by tree development. Therefore, it goes without saying that the present invention can be applied to a multi-level information source.

Patent Citations

Cited Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US4028731 * | Sep 29, 1975 | Jun 7, 1977 | International Business Machines Corporation | Apparatus for compression coding using cross-array correlation between two-dimensional matrices derived from two-valued digital images |

US4070694 * | Dec 22, 1975 | Jan 24, 1978 | Olympus Optical Company Limited | Picture image information band compression and transmission system |

US4099257 * | Sep 2, 1976 | Jul 4, 1978 | International Business Machines Corporation | Markov processor for context encoding from given characters and for character decoding from given contexts |

US4177456 * | Feb 3, 1978 | Dec 4, 1979 | Hitachi, Ltd. | Decoder for variable-length codes |

US4191974 * | Feb 7, 1978 | Mar 4, 1980 | Mitsubishi Denki Kabushiki Kaisha | Facsimile encoding communication system |

US4286256 * | Nov 28, 1979 | Aug 25, 1981 | International Business Machines Corporation | Method and means for arithmetic coding utilizing a reduced number of operations |

US4355306 * | Jan 30, 1981 | Oct 19, 1982 | International Business Machines Corporation | Dynamic stack data compression and decompression system |

US4905297 * | Nov 18, 1988 | Feb 27, 1990 | International Business Machines Corporation | Arithmetic coding encoder and decoder system |

US4933883 * | May 3, 1988 | Jun 12, 1990 | International Business Machines Corporation | Probability adaptation for arithmetic coders |

Non-Patent Citations

Reference | ||
---|---|---|

1 | K. S. Fu et al., Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill Book Company, New York, copyright 1987, pp. 342-351. | |

2 | Pennebaker et al., An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder, IBM Journal of Research and Development, vol. 32, No. 6, Nov. 1988, pp. 717-726. | |

Referenced by

Citing Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US5936559 * | Jun 9, 1997 | Aug 10, 1999 | At&T Corporation | Method for optimizing data compression and throughput |

US6188334 * | May 5, 2000 | Feb 13, 2001 | At&T Corp. | Z-coder: fast adaptive binary arithmetic coder |

US6225925 * | Mar 13, 1998 | May 1, 2001 | At&T Corp. | Z-coder: a fast adaptive binary arithmetic coder |

US6281817 * | Feb 28, 2001 | Aug 28, 2001 | At&T Corp. | Z-coder: a fast adaptive binary arithmetic coder |

US6373408 | Apr 11, 2000 | Apr 16, 2002 | Mitsubishi Denki Kabushiki Kaisha | Encoding apparatus, decoding apparatus, encoding/decoding apparatus, encoding method and decoding method |

US6476740 | Dec 14, 2001 | Nov 5, 2002 | At&T Corp. | Z-coder: a fast adaptive binary arithmetic coder |

US6756921 | Dec 20, 2001 | Jun 29, 2004 | Mitsubishi Denki Kabushiki Kaisha | Multiple quality data creation encoder, multiple quality data creation decoder, multiple quantity data encoding decoding system, multiple quality data creation encoding method, multiple quality data creation decoding method, and multiple quality data creation encoding/decoding method |

US7209593 | Dec 17, 2002 | Apr 24, 2007 | Mitsubishi Denki Kabushiki Kaisha | Apparatus, method, and programs for arithmetic encoding and decoding |

US7305138 * | Jul 14, 2003 | Dec 4, 2007 | Nec Corporation | Image encoding apparatus, image encoding method and program |

US7333661 | Jul 14, 2003 | Feb 19, 2008 | Mitsubishi Denki Kabushiki Kaisha | Image coding device image coding method and image processing device |

US20030113030 * | Dec 17, 2002 | Jun 19, 2003 | Tomohiro Kimura | Encoding apparatus, decoding apparatus, encoding/decoding apparatus, encoding method, decoding method, encoding/decoding method, and programs |

US20040013311 * | Jul 14, 2003 | Jan 22, 2004 | Koichiro Hirao | Image encoding apparatus, image encoding method and program |

US20040240742 * | Jul 14, 2003 | Dec 2, 2004 | Toshiyuki Takahashi | Image coding device image coding method and image processing device |

Classifications

U.S. Classification | 341/51, 341/107, 358/1.9 |

International Classification | H03M7/40, H04N1/417 |

Cooperative Classification | H03M7/4006, H04N1/417 |

European Classification | H03M7/40A, H04N1/417 |

Legal Events

Date | Code | Event | Description |
---|---|---|---|

Apr 15, 1999 | FPAY | Fee payment | Year of fee payment: 8 |

Mar 31, 2003 | FPAY | Fee payment | Year of fee payment: 12 |
