CA2551281A1 - Voice/musical sound encoding device and voice/musical sound encoding method - Google Patents


Publication number
CA2551281A1
Authority
CA
Canada
Legal status
Abandoned
Application number
CA002551281A
Other languages
French (fr)
Inventor
Kaoru Sato
Toshiyuki Morii
Tomofumi Yamanashi
Current Assignee
Panasonic Corp
Original Assignee
Individual
Application filed by Individual
Publication of CA2551281A1


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032: Quantisation or dequantisation of spectral components
    • G10L19/038: Vector quantisation, e.g. TwinVQ audio

Abstract

There is provided a voice/musical sound encoding device capable of high-quality encoding by performing vector quantization that takes human auditory characteristics into consideration. In this voice/musical sound encoding device, an orthogonal transform section (201) converts a voice/musical sound signal from time components to frequency components. An auditory masking characteristic value calculation section (203) calculates an auditory masking characteristic value from the voice/musical sound signal. Based on the auditory masking characteristic value, a vector quantization section (202) performs vector quantization while changing the method of calculating the distance between a code vector obtained from a predetermined codebook and the frequency components.

Description

DESCRIPTION
VOICE/MUSICAL SOUND ENCODING DEVICE AND VOICE/MUSICAL
SOUND ENCODING METHOD
Technical Field
[0001] The present invention relates to a voice/musical tone coding apparatus and voice/musical tone coding method that perform voice/musical tone signal transmission in a packet communication system typified by Internet communication, a mobile communication system, or the like.
Background Art
[0002] When a voice signal is transmitted in a packet communication system typified by Internet communication, a mobile communication system, or the like, compression and coding technology is used to increase transmission efficiency. To date, many voice coding methods have been developed, and many of the low-bit-rate voice coding methods developed in recent years have a scheme in which a voice signal is separated into spectrum information and detailed spectrum structure information, and compression and coding are performed on the separated items.
[0003] Also, with the ongoing development of voice telephony environments on the Internet as typified by IP telephony, there is a growing need for technologies that efficiently compress and transfer voice signals.

[0004] In particular, various schemes relating to voice coding that use human auditory masking characteristics are being studied. Auditory masking is the phenomenon whereby, when a strong signal component is contained at a particular frequency, an adjacent frequency component cannot be heard, and this characteristic is used to improve quality.
[0005] An example of a technology related to this is the method described in Patent Literature 1 that uses auditory masking characteristics in vector quantization distance calculation.
[0006] The voice coding method using auditory masking characteristics in Patent Literature 1 is a calculation method whereby, when a frequency component of an input signal and a code vector shown by a codebook are both in an auditory masking area, the distance in vector quantization is taken to be 0.
Patent Literature 1: Japanese Patent Application Laid-Open No. HEI 8-123490 (p.3, FIG.1)
Disclosure of Invention
Problems to be Solved by the Invention
[0007] However, the conventional method shown in Patent Literature 1 can only be adapted to cases with limited input signals and code vectors, and its sound quality performance is inadequate.
[0008] The present invention has been implemented taking into account the problems described above, and it is an object of the present invention to provide a high-quality voice/musical tone coding apparatus and voice/musical tone coding method that select a suitable code vector that minimizes degradation of a signal that has a large auditory effect.
Means for Solving the Problems
[0009] In order to solve the above problems, a voice/musical tone coding apparatus of the present invention has a configuration that includes: a quadrature transformation processing section that converts a voice/musical tone signal from time components to frequency components; an auditory masking characteristic value calculation section that finds an auditory masking characteristic value from the aforementioned voice/musical tone signal; and a vector quantization section that performs vector quantization while changing, based on the aforementioned auditory masking characteristic value, the method of calculating the distance between a code vector found from a preset codebook and the aforementioned frequency components.
Advantageous Effect of the Invention
[0010] According to the present invention, by performing quantization while changing the method of calculating the distance between an input signal and a code vector based on an auditory masking characteristic value, it is possible to select a suitable code vector that minimizes degradation of a signal that has a large auditory effect, improve input signal reproducibility, and obtain good decoded voice.
Brief Description of Drawings
[0011]
FIG.1 is a block configuration diagram of an overall system that includes a voice/musical tone coding apparatus and voice/musical tone decoding apparatus according to Embodiment 1 of the present invention;
FIG.2 is a block configuration diagram of a voice/musical tone coding apparatus according to Embodiment 1 of the present invention;
FIG.3 is a block configuration diagram of an auditory masking characteristic value calculation section according to Embodiment 1 of the present invention;
FIG.4 is a drawing showing a sample configuration of critical bandwidths according to Embodiment 1 of the present invention;
FIG.5 is a flowchart of a vector quantization section according to Embodiment 1 of the present invention;
FIG.6 is a drawing explaining the relative positional relationship of auditory masking characteristic values, coded values, and MDCT coefficients according to Embodiment 1 of the present invention;
FIG.7 is a block configuration diagram of a voice/musical tone decoding apparatus according to Embodiment 1 of the present invention;
FIG.8 is a block configuration diagram of a voice/musical tone coding apparatus and voice/musical tone decoding apparatus according to Embodiment 2 of the present invention;
FIG.9 is a schematic configuration diagram of a CELP type voice coding apparatus according to Embodiment 2 of the present invention;
FIG.10 is a schematic configuration diagram of a CELP type voice decoding apparatus according to Embodiment 2 of the present invention;
FIG.11 is a block configuration diagram of an enhancement layer coding section according to Embodiment 2 of the present invention;
FIG.12 is a flowchart of a vector quantization section according to Embodiment 2 of the present invention;
FIG.13 is a drawing explaining the relative positional relationship of auditory masking characteristic values, coded values, and MDCT coefficients according to Embodiment 2 of the present invention;
FIG.14 is a block configuration diagram of a decoding section according to Embodiment 2 of the present invention;
FIG.15 is a block configuration diagram of a voice signal transmitting apparatus and voice signal receiving apparatus according to Embodiment 3 of the present invention;
FIG.16 is a flowchart of a coding section according to Embodiment 1 of the present invention; and
FIG.17 is a flowchart of an auditory masking value calculation section according to Embodiment 1 of the present invention.
Best Mode for Carrying Out the Invention
[0012] Embodiments of the present invention will now be described in detail below with reference to the accompanying drawings.
[0013] (Embodiment 1) FIG.1 is a block diagram showing the configuration of an overall system that includes a voice/musical tone coding apparatus and voice/musical tone decoding apparatus according to Embodiment 1 of the present invention.
[0014] This system is composed of voice/musical tone coding apparatus 101 that codes an input signal, transmission channel 103, and voice/musical tone decoding apparatus 105 that decodes the transmitted coded information.
[0015] Transmission channel 103 may be a wireless LAN, mobile terminal packet communication, Bluetooth, or suchlike radio communication channel, or may be an ADSL, FTTH, or suchlike cable communication channel.
[0016] Voice/musical tone coding apparatus 101 codes input signal 100, and outputs the result to transmission channel 103 as coded information 102.
[0017] Voice/musical tone decoding apparatus 105 receives coded information 102 via transmission channel 103, performs decoding, and outputs the result as output signal 106.
[0018] The configuration of voice/musical tone coding apparatus 101 will be described using the block diagram in FIG.2. In FIG.2, voice/musical tone coding apparatus 101 is mainly composed of: quadrature transformation processing section 201 that converts input signal 100 from time components to frequency components; auditory masking characteristic value calculation section 203 that calculates an auditory masking characteristic value from input signal 100; shape codebook 204 that shows the correspondence between an index and a normalized code vector; gain codebook 205 that shows the gain corresponding to each normalized code vector of shape codebook 204;
and vector quantization section 202 that performs vector quantization of an input signal converted to the aforementioned frequency components using the aforementioned auditory masking characteristic value, and the aforementioned shape codebook and gain codebook.
[0019] The operation of voice/musical tone coding apparatus 101 will now be described in detail in accordance with the procedure in the flowchart in FIG.16.
[0020] First, input signal sampling processing will be described. Voice/musical tone coding apparatus 101 divides input signal 100 into sections of N samples (where N is a natural number), takes N samples as one frame, and performs coding frame by frame. Here, input signal 100 subject to coding is represented as x_n (n = 0, ..., N-1), where n indicates that this is the (n+1)th of the signal elements comprising the aforementioned divided input signal.
[0021] Input signal x_n 100 is input to quadrature transformation processing section 201 and auditory masking characteristic value calculation section 203.
[0022] Quadrature transformation processing section 201 has internal buffers buf_n (n = 0, ..., N-1) for the aforementioned signal elements, and initializes them with 0 as the initial value by means of Equation (1).
[0023]
buf_n = 0   (n = 0, ..., N-1)   [Equation 1]
[0024] Quadrature transformation processing (step S1601) will now be described with regard to the calculation procedure in quadrature transformation processing section 201 and data output to an internal buffer.
[0025] Quadrature transformation processing section 201 performs a modified discrete cosine transform (MDCT) on input signal x_n 100, and finds MDCT coefficient X_k by means of Equation (2).
[0026]
X_k = (2/N) Σ_{n=0}^{2N-1} x'_n cos( (2n+1+N)(2k+1)π / (4N) )   (k = 0, ..., N-1)   [Equation 2]
[0027] Here, k signifies the index of each sample in one frame. Quadrature transformation processing section 201 finds x'_n, which is a vector linking input signal x_n 100 and buffer buf_n, by means of Equation (3).
[0028]
x'_n = buf_n   (n = 0, ..., N-1)
x'_n = x_{n-N}   (n = N, ..., 2N-1)   [Equation 3]
[0029] Quadrature transformation processing section 201 then updates buffer buf_n by means of Equation (4).
[0030]
buf_n = x_n   (n = 0, ..., N-1)   [Equation 4]
[0031] Next, quadrature transformation processing section 201 outputs MDCT coefficient Xk to vector quantization section 202.
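The framing, buffering, and MDCT steps of Equations (1) through (4) can be sketched in Python as follows. This is a minimal illustrative sketch, not a reference implementation: the function name is hypothetical, and the 2/N scaling follows Equation (2) as reconstructed above.

```python
import math

def mdct_frame(x, buf):
    """One frame of the MDCT of Equations (1)-(4) (illustrative sketch).

    x   -- current frame of N input samples (x_n)
    buf -- internal buffer buf_n holding the previous frame, length N
    Returns (X, new_buf): the N MDCT coefficients X_k and the updated buffer.
    """
    N = len(x)
    xp = list(buf) + list(x)   # x'_n: buffer followed by current frame (Equation 3)
    X = []
    for k in range(N):         # Equation (2)
        s = sum(xp[n] * math.cos((2 * n + 1 + N) * (2 * k + 1) * math.pi / (4 * N))
                for n in range(2 * N))
        X.append(2.0 / N * s)
    return X, list(x)          # buffer update (Equation 4)
```

A practical coder would precompute the cosine table and window the overlapped frames; the direct double loop here simply mirrors the equations.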
[0032] The configuration of auditory masking characteristic value calculation section 203 in FIG.2 will now be described using the block diagram in FIG.3.
[0033] In FIG.3, auditory masking characteristic value calculation section 203 is composed of: Fourier transform section 301 that performs Fourier transform processing of an input signal; power spectrum calculation section 302 that calculates a power spectrum from the aforementioned Fourier-transformed input signal; minimum audible threshold value calculation section 304 that calculates a minimum audible threshold value from an input signal; memory buffer 305 that buffers the aforementioned calculated minimum audible threshold value; and auditory masking value calculation section 303 that calculates an auditory masking value from the aforementioned calculated power spectrum and the aforementioned buffered minimum audible threshold value.
[0034] Next, auditory masking characteristic value calculation processing (step S1602) in auditory masking characteristic value calculation section 203 configured as described above will be explained using the flowchart in FIG.17.
[0035] The auditory masking characteristic value calculation method is disclosed in a paper by J. Johnston et al. (J. Johnston, "Estimation of perceptual entropy using noise masking criteria", in Proc. ICASSP-88, May 1988, pp. 2524-2527).
[0036] First, the operation of Fourier transform section 301 will be described with regard to Fourier transform processing (step S1701).
[0037] Fourier transform section 301 has input signal x_n 100 as input, and converts this to a frequency domain signal F_k by means of Equation (5). Here, e is the natural logarithm base, and k is the index of each sample in one frame.
[0038]
F_k = Σ_{n=0}^{N-1} x_n e^{-j(2πnk/N)}   (k = 0, ..., N-1)   [Equation 5]
[0039] Fourier transform section 301 then outputs obtained F_k to power spectrum calculation section 302.

[0040] Next, power spectrum calculation processing (step S1702) will be described.
[0041] Power spectrum calculation section 302 has frequency domain signal F_k output from Fourier transform section 301 as input, and finds power spectrum P_k of F_k by means of Equation (6). Here, k is the index of each sample in one frame.
[0042]
P_k = (F_k^Re)^2 + (F_k^Im)^2   (k = 0, ..., N-1)   [Equation 6]
[0043] In Equation (6), F_k^Re is the real part of frequency domain signal F_k, and is found by power spectrum calculation section 302 by means of Equation (7).
[0044]
F_k^Re = Σ_{n=0}^{N-1} x_n cos( 2πnk/N )   (k = 0, ..., N-1)   [Equation 7]
[0045] Also, F_k^Im is the imaginary part of frequency domain signal F_k, and is found by power spectrum calculation section 302 by means of Equation (8).
[0046]
F_k^Im = -Σ_{n=0}^{N-1} x_n sin( 2πnk/N )   (k = 0, ..., N-1)   [Equation 8]
[0047] Power spectrum calculation section 302 then outputs obtained power spectrum Pk to auditory masking value calculation section 303.
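The direct DFT of Equations (5) through (8) can be sketched as follows. This is illustrative only: the function name is hypothetical, and a practical implementation would use an FFT rather than the O(N^2) loop that mirrors the equations.

```python
import math

def power_spectrum(x):
    """Power spectrum P_k of Equations (5)-(8), via a direct DFT.

    Returns N values (F_k^Re)^2 + (F_k^Im)^2 for k = 0, ..., N-1.
    """
    N = len(x)
    P = []
    for k in range(N):
        re = sum(x[n] * math.cos(2 * math.pi * n * k / N) for n in range(N))   # Equation (7)
        im = -sum(x[n] * math.sin(2 * math.pi * n * k / N) for n in range(N))  # Equation (8)
        P.append(re * re + im * im)                                            # Equation (6)
    return P
```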
[0048] Next, minimum audible threshold value calculation processing (step S1703) will be described.
[0049] Minimum audible threshold value calculation section 304 finds minimum audible threshold value athk in the first frame only by means of Equation (9).
[0050]
ath_k = 3.64 (k/1000)^{-0.8} - 6.5 e^{-0.6 (k/1000 - 3.3)^2} + 10^{-3} (k/1000)^4   (k = 0, ..., N-1)   [Equation 9]
[0051] Next, memory buffer storage processing (step S1704) will be described.
[0052] Minimum audible threshold value calculation section 304 outputs minimum audible threshold value ath_k to memory buffer 305. Memory buffer 305 outputs the input minimum audible threshold value ath_k to auditory masking value calculation section 303. Minimum audible threshold value ath_k is determined for each frequency component based on human hearing, and a component equal to or smaller than ath_k is not audible.
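The threshold formula of Equation (9) can be evaluated as below, under the assumption that k/1000 is read as frequency in kHz. This is the standard threshold-in-quiet approximation; the function name is hypothetical, and the formula is undefined at 0 kHz because of the negative power term.

```python
import math

def abs_threshold(f_khz):
    """Minimum audible threshold of Equation (9) for a frequency in kHz.

    Sketch of the standard threshold-in-quiet curve; f_khz must be > 0.
    """
    return (3.64 * f_khz ** -0.8
            - 6.5 * math.exp(-0.6 * (f_khz - 3.3) ** 2)
            + 1e-3 * f_khz ** 4)
```

The curve dips around 3-4 kHz, where human hearing is most sensitive, and rises steeply toward both ends of the audible range.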
[0053] Next, the operation of auditory masking value calculation section 303 will be described with regard to auditory masking value calculation processing (step S1705).
[0054] Auditory masking value calculation section 303 has power spectrum P_k output from power spectrum calculation section 302 as input, and divides power spectrum P_k into m critical bandwidths. Here, a critical bandwidth is a threshold bandwidth for which the amount by which a pure tone at the center frequency is masked does not increase even if band noise is increased. FIG.4 shows a sample critical bandwidth configuration. In FIG.4, m is the total number of critical bandwidths, and power spectrum P_k is divided into m critical bandwidths. Also, i is the critical bandwidth index, and has a value from 0 to m-1. Furthermore, bh_i and bl_i are the minimum frequency index and maximum frequency index of each critical bandwidth i, respectively.
[0055] Next, auditory masking value calculation section 303 has power spectrum P_k output from power spectrum calculation section 302 as input, and finds power spectrum B_i calculated for each critical bandwidth by means of Equation (10).
[0056]
B_i = Σ_{k=bh_i}^{bl_i} P_k   (i = 0, ..., m-1)   [Equation 10]
[0057] Auditory masking value calculation section 303 then finds spreading function SF(t) by means of Equation (11). Spreading function SF(t) is used to calculate, for each frequency component, the effect (the simultaneous masking effect) that the frequency component has on adjacent frequencies.
[0058]
SF(t) = 15.81139 + 7.5 (t + 0.474) - 17.5 √(1 + (t + 0.474)^2)   (t = 0, ..., N_t - 1)   [Equation 11]
[0059] Here, Nt is a constant set beforehand within a range that satisfies the condition in Equation (12).
[0060]
0 ≤ N_t ≤ m   [Equation 12]
[0061] Next, auditory masking value calculation section 303 finds constant C_i, using power spectrum B_i and spreading function SF(t) added for each critical bandwidth, by means of Equation (13).
[0062]
C_i = Σ_{t=N_t-i}^{N_t} B_i · SF(t)   (i < N_t)
C_i = Σ_{t=0}^{N_t} B_i · SF(t)   (N_t ≤ i ≤ m - N_t)
C_i = Σ_{t=0}^{m-i} B_i · SF(t)   (i > m - N_t)
[Equation 13]
[0063] Auditory masking value calculation section 303 then finds geometric mean μ_i^g by means of Equation (14).
[0064]
μ_i^g = 10^{ ( Σ_{k=bh_i}^{bl_i} log10 P_k ) / (bl_i - bh_i) }   (i = 0, ..., m-1)   [Equation 14]
[0065] Auditory masking value calculation section 303 then finds arithmetic mean μ_i^a by means of Equation (15).
[0066]
μ_i^a = ( Σ_{k=bh_i}^{bl_i} P_k ) / (bl_i - bh_i)   (i = 0, ..., m-1)   [Equation 15]
[0067] Auditory masking value calculation section 303 then finds SFMi (Spectral Flatness Measure) by means of Equation (16).
[0068]
SFM_i = μ_i^g / μ_i^a   (i = 0, ..., m-1)   [Equation 16]
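Equations (14) through (16) amount to the classic spectral flatness measure, which can be sketched per band as follows. This is an illustrative sketch assuming strictly positive power values; note that the printed denominators of Equations (14) and (15) use bl_i - bh_i rather than the plain sample count used here, and the function name is hypothetical.

```python
import math

def spectral_flatness(P):
    """Spectral Flatness Measure of Equations (14)-(16): the ratio of the
    geometric mean to the arithmetic mean of a band's power spectrum.
    Close to 1 for noise-like bands, close to 0 for tonal bands.
    """
    n = len(P)
    g = 10 ** (sum(math.log10(p) for p in P) / n)   # geometric mean (Equation 14)
    a = sum(P) / n                                  # arithmetic mean (Equation 15)
    return g / a                                    # SFM (Equation 16)
```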
[0069] Auditory masking value calculation section 303 then finds constant α_i by means of Equation (17).
[0070]
α_i = min( (10 log10 SFM_i) / (-60), 1 )   (i = 0, ..., m-1)   [Equation 17]
[0071] Auditory masking value calculation section 303 then finds offset value O_i for each critical bandwidth by means of Equation (18).
[0072]
O_i = α_i (14.5 + i) + 5.5 (1 - α_i)   (i = 0, ..., m-1)   [Equation 18]
[0073] Auditory masking value calculation section 303 then finds auditory masking value T_i for each critical bandwidth by means of Equation (19).
[0074]
T_i = 10^{ log10(C_i) - O_i/10 } / (bl_i - bh_i)   (i = 0, ..., m-1)   [Equation 19]
[0075] Auditory masking value calculation section 303 then finds auditory masking characteristic value Mk from minimum audible threshold value athk output from memory buffer 305 by means of Equation (20) , and outputs this to vector quantization section 202.
[0076]
M_k = max( ath_k, T_i )   (k = bh_i, ..., bl_i ; i = 0, ..., m-1)   [Equation 20]
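The final combination step of Equation (20) can be sketched as follows. The function name and the representation of critical bands as (bh_i, bl_i) index pairs are assumptions for illustration.

```python
def masking_characteristic(ath, T, bands):
    """Auditory masking characteristic value M_k of Equation (20):
    M_k = max(ath_k, T_i) for every index k inside critical band i.

    ath   -- minimum audible threshold per frequency index k
    T     -- masking value per critical band i
    bands -- list of (bh_i, bl_i) index pairs, inclusive on both ends
    """
    M = list(ath)
    for i, (bh, bl) in enumerate(bands):
        for k in range(bh, bl + 1):
            M[k] = max(ath[k], T[i])
    return M
```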
[0077] Next, codebook acquisition processing (step S1603) and vector quantization processing (step S1604) in vector quantization section 202 will be described in detail using the process flowchart in FIG.5.
[0078] Using shape codebook 204 and gain codebook 205, vector quantization section 202 performs vector quantization of MDCT coefficient X_k output from quadrature transformation processing section 201, using the auditory masking characteristic value output from auditory masking characteristic value calculation section 203, and outputs obtained coded information 102 to transmission channel 103 in FIG.1.
[0079] The codebooks will now be described.
[0080] Shape codebook 204 is composed of previously created Nj kinds of N-dimensional code vectors code_k^j (j = 0, ..., Nj-1; k = 0, ..., N-1), and gain codebook 205 is composed of previously created Nd kinds of gain codes gain_d (d = 0, ..., Nd-1).
[0081] In step 501, initialization is performed by assigning 0 to code vector index j in shape codebook 204, and a sufficiently large value to minimum error DistMIN.
[0082] In step 502, N-dimensional code vector code_k^j (k = 0, ..., N-1) is read from shape codebook 204.
[0083] In step 503, MDCT coefficient X_k output from quadrature transformation processing section 201 is input, and gain Gain of code vector code_k^j (k = 0, ..., N-1) read from shape codebook 204 in step 502 is found by means of Equation (21).
[0084]
Gain = ( Σ_{k=0}^{N-1} X_k · code_k^j ) / ( Σ_{k=0}^{N-1} (code_k^j)^2 )   [Equation 21]
[0085] In step 504, 0 is assigned to calc_count, which indicates the number of executions of step 505.

[0086] In step 505, auditory masking characteristic value M_k output from auditory masking characteristic value calculation section 203 is input, and temporary gain temp_k (k = 0, ..., N-1) is found by means of Equation (22).
[0087]
temp_k = code_k^j   ( |code_k^j · Gain| ≥ M_k )
temp_k = 0   ( |code_k^j · Gain| < M_k )   (k = 0, ..., N-1)   [Equation 22]
[0088] In Equation (22), if k satisfies the condition |code_k^j · Gain| ≥ M_k, code_k^j is assigned to temporary gain temp_k, and if k satisfies the condition |code_k^j · Gain| < M_k, 0 is assigned to temporary gain temp_k.
[ 0089 ] Then, in step 505, gain Gain for an element that is greater than or equal to the auditory masking value is found by means of Equation (23).
[0090]
Gain = ( Σ_{k=0}^{N-1} X_k · temp_k ) / ( Σ_{k=0}^{N-1} temp_k^2 )   [Equation 23]
[0091] If temporary gain temp_k is 0 for all k's, 0 is assigned to gain Gain. Also, coded value R_k is found from gain Gain and code_k^j by means of Equation (24).
[0092]
R_k = Gain · code_k^j   (k = 0, ..., N-1)   [Equation 24]
[0093] In step 506, calc_count is incremented by 1.
[0094] In step 507, calc_count and a predetermined non-negative integer Nc are compared, and the process flow returns to step 505 if calc_count is a smaller value than Nc, or proceeds to step 508 if calc_count is greater than or equal to Nc. By repeatedly finding gain Gain in this way, gain Gain can be converged to a suitable value.
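The gain fitting of Equations (21) through (24), including the repeated refinement of steps 505 through 507, can be sketched as follows. This is a minimal sketch: the function name, argument order, and the fixed n_iter count standing in for the constant compared against calc_count are all assumptions.

```python
def fit_gain(X, code, M, n_iter=2):
    """Gain fitting of Equations (21)-(24): a least-squares gain, then
    repeated refinement using only components above the masking value.

    X, code, M -- MDCT coefficients, shape code vector, masking values
    Returns (gain, R) with R_k = gain * code_k (Equation 24).
    """
    num = sum(x * c for x, c in zip(X, code))
    den = sum(c * c for c in code)
    gain = num / den if den else 0.0                     # Equation (21)
    for _ in range(n_iter):                              # steps 505-507
        temp = [c if abs(c * gain) >= m else 0.0
                for c, m in zip(code, M)]                # Equation (22)
        den = sum(t * t for t in temp)
        gain = (sum(x * t for x, t in zip(X, temp)) / den) if den else 0.0  # Equation (23)
    R = [gain * c for c in code]                         # Equation (24)
    return gain, R
```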
[0095] In step 508, 0 is assigned to cumulative error Dist, and 0 is also assigned to sample index k.
[0096] Next, in steps 509, 511, 512, and 514, case determination is performed for the relative positional relationship between auditory masking characteristic value Mk, coded value Rk, and MDCT coefficient Xk, and distance calculation is performed in step 510, 513, 515, or 516 according to the case determination result.
[0097] This case determination according to the relative positional relationship is shown in FIG.6. In FIG.6, a white circle symbol (○) signifies an input signal MDCT coefficient X_k, and a black circle symbol (●) signifies a coded value R_k. The items shown in FIG.6 show the special characteristics of the present invention: the area from +M_k through 0 to -M_k, where M_k is the auditory masking characteristic value found by auditory masking characteristic value calculation section 203, is referred to as the auditory masking area, and results that are closer in terms of the sense of hearing, and therefore of higher quality, can be obtained by changing the distance calculation method when input signal MDCT coefficient X_k or coded value R_k is present in this auditory masking area.
[0098] The distance calculation method in vector quantization according to the present invention will now be described. When neither input signal MDCT coefficient X_k (○) nor coded value R_k (●) is present in the auditory masking area, and input signal MDCT coefficient X_k and coded value R_k have the same sign, as shown in "Case 1" in FIG.6, distance D11 between input signal MDCT coefficient X_k (○) and coded value R_k (●) is simply calculated. When one of input signal MDCT coefficient X_k (○) and coded value R_k (●) is present in the auditory masking area, as shown in "Case 3" and "Case 4" in FIG.6, the position within the auditory masking area is corrected to the M_k value (or in some cases the -M_k value) and D31 or D41 is calculated. When input signal MDCT coefficient X_k (○) and coded value R_k (●) straddle the auditory masking area, as shown in "Case 2" in FIG.6, the distance across the auditory masking area is calculated as β·D23 (where β is an arbitrary coefficient). When input signal MDCT coefficient X_k (○) and coded value R_k (●) are both present within the auditory masking area, as shown in "Case 5" in FIG.6, distance D51 is calculated as 0.
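Under the assumption that the per-sample errors are D11 = |X_k - R_k|, D31 = |X_k| - M_k, D41 = |R_k| - M_k, and D21 + D22 + β·D23 for Case 2 (as described for Equations (26) through (36) below), the five-case distance can be sketched as follows. The function name and the β default are illustrative only; the patent says only that a value of 1 or less is suitable for β.

```python
def masked_distance(X, R, M, beta=0.5):
    """Cumulative distance with the five cases of FIG.6 (steps 509-516).

    X, R, M -- MDCT coefficients, coded values, masking values, same length.
    """
    dist = 0.0
    for x, r, m in zip(X, R, M):
        if abs(x) >= m and abs(r) >= m and x * r >= 0:   # Case 1: same sign
            dist += abs(x - r)                           # D11
        elif abs(x) < m and abs(r) < m:                  # Case 5: both masked
            dist += 0.0                                  # D51 = 0
        elif abs(x) >= m and abs(r) >= m:                # Case 2: straddles the area
            dist += (abs(x) - m) + (abs(r) - m) + beta * 2 * m  # D21 + D22 + beta*D23
        elif abs(x) >= m:                                # Case 3: only R masked
            dist += abs(x) - m                           # D31
        else:                                            # Case 4: only X masked
            dist += abs(r) - m                           # D41
    return dist
```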
[0099] Next, processing in step 509 through step 517 for each of the cases will be described.
[0100] In step 509, whether or not the relative positional relationship between auditory masking characteristic value M_k, coded value R_k, and MDCT coefficient X_k corresponds to "Case 1" in FIG.6 is determined by means of the conditional expression in Equation (25).
[0101]
( |X_k| ≥ M_k ) and ( |R_k| ≥ M_k ) and ( X_k · R_k ≥ 0 )   [Equation 25]
[0102] Equation (25) signifies a case in which the absolute value of MDCT coefficient X_k and the absolute value of coded value R_k are both greater than or equal to auditory masking characteristic value M_k, and MDCT coefficient X_k and coded value R_k have the same sign. If auditory masking characteristic value M_k, MDCT coefficient X_k, and coded value R_k satisfy the conditional expression in Equation (25), the process flow proceeds to step 510, and if they do not satisfy the conditional expression in Equation (25), the process flow proceeds to step 511.
[0103] In step 510, error Dist1 between coded value R_k and MDCT coefficient X_k is found by means of Equation (26), error Dist1 is added to cumulative error Dist, and the process flow proceeds to step 517.
[0104]
Dist1 = D11 = |X_k - R_k|   [Equation 26]
[0105] In step 511, whether or not the relative positional relationship between auditory masking characteristic value M_k, coded value R_k, and MDCT coefficient X_k corresponds to "Case 5" in FIG.6 is determined by means of the conditional expression in Equation (27).
[0106]
( |X_k| < M_k ) and ( |R_k| < M_k )   [Equation 27]
[0107] Equation (27) signifies a case in which the absolute value of MDCT coefficient X_k and the absolute value of coded value R_k are both less than auditory masking characteristic value M_k. If auditory masking characteristic value M_k, MDCT coefficient X_k, and coded value R_k satisfy the conditional expression in Equation (27), the error between coded value R_k and MDCT coefficient X_k is taken to be 0, nothing is added to cumulative error Dist, and the process flow proceeds to step 517, whereas if they do not satisfy the conditional expression in Equation (27), the process flow proceeds to step 512.
[0108] In step 512, whether or not the relative positional relationship between auditory masking characteristic value M_k, coded value R_k, and MDCT coefficient X_k corresponds to "Case 2" in FIG.6 is determined by means of the conditional expression in Equation (28).
[0109]
( |X_k| ≥ M_k ) and ( |R_k| ≥ M_k ) and ( X_k · R_k < 0 )   [Equation 28]
[0110] Equation (28) signifies a case in which the absolute value of MDCT coefficient X_k and the absolute value of coded value R_k are both greater than or equal to auditory masking characteristic value M_k, and MDCT coefficient X_k and coded value R_k have opposite signs. If auditory masking characteristic value M_k, MDCT coefficient X_k, and coded value R_k satisfy the conditional expression in Equation (28), the process flow proceeds to step 513, and if they do not satisfy the conditional expression in Equation (28), the process flow proceeds to step 514.
[0111] In step 513, error Dist2 between coded value R_k and MDCT coefficient X_k is found by means of Equation (29), error Dist2 is added to cumulative error Dist, and the process flow proceeds to step 517.
[0112]
Dist2 = D21 + D22 + β · D23   [Equation 29]
[0113] Here, β is a value set as appropriate according to MDCT coefficient X_k, coded value R_k, and auditory masking characteristic value M_k. A value of 1 or less is suitable for β, and a numeric value found experimentally by subject evaluation may be used. D21, D22, and D23 are found by means of Equation (30), Equation (31), and Equation (32), respectively.
[0114]
D21 = |X_k| - M_k   [Equation 30]
[0115]
D22 = |R_k| - M_k   [Equation 31]
[0116]
D23 = 2 · M_k   [Equation 32]
[0117] In step 514, whether or not the relative positional relationship between auditory masking characteristic value M_k, coded value R_k, and MDCT coefficient X_k corresponds to "Case 3" in FIG.6 is determined by means of the conditional expression in Equation (33).
[0118]
( |X_k| ≥ M_k ) and ( |R_k| < M_k )   [Equation 33]
[0119] Equation (33) signifies a case in which the absolute value of MDCT coefficient X_k is greater than or equal to auditory masking characteristic value M_k, and the absolute value of coded value R_k is less than auditory masking characteristic value M_k. If auditory masking characteristic value M_k, MDCT coefficient X_k, and coded value R_k satisfy the conditional expression in Equation (33), the process flow proceeds to step 515, and if they do not satisfy the conditional expression in Equation (33), the process flow proceeds to step 516.
[0120] In step 515, error Dist3 between coded value R_k and MDCT coefficient X_k is found by means of Equation (34), error Dist3 is added to cumulative error Dist, and the process flow proceeds to step 517.
[0121]
Dist3 = D31 = |X_k| - M_k   [Equation 34]
[ 0122 ] In step 516, the relative positional relationship between auditory masking characteristic value Mk, coded value Rk, and MDCT coefficient Xk corresponds to "Case 4" in FIG.6, and the conditional expression in Equation (35) is satisfied.
[ 0123 ]
~Xkl <Mk~ and ~Rkl<Mk~ [Equation 35 ]
[0124] Equation(35)signifiesacaseinwhichtheabsolute value of MDCT coefficient Xk is less than auditorymasking characteristic value Mk, and coded value Rk is greater than or equal to auditory masking characteristic value Mk. In step 516, error Dist4 between coded value Rk and MDCT coefficient Xk is found by means of Equation (36) , error Dist4 is added to cumulative error Dist, and the process flow proceeds to step 517.
[ 012 5 ]
DlSt4 = D41 [Equation 36]
k [0126] In step 517, k is incremented by 1.
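The per-sample case analysis of steps 510 through 516 can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the Case 1 and Case 5 distances (equations preceding Equation (28)) are not restated in this excerpt, so the plain-error and zero-error branches below are inferred from the parallel description of "Case 1" and "Case 5" in Embodiment 2, and the value of beta is an arbitrary choice within the stated "1 or less" range.

```python
# Distance contribution of one sample k, per Cases 1-5 of FIG.6.
# beta is the weight of Equation (29); a value of 1 or less is suitable.

def dist_k(x_k, r_k, m_k, beta=0.5):
    ax, ar = abs(x_k), abs(r_k)
    if ax >= m_k and ar >= m_k and x_k * r_k >= 0:
        # Case 1: both outside the masking area, same sign -> plain error
        # (inferred from the Embodiment 2 description of Case 1).
        return abs(x_k - r_k)
    if ax < m_k and ar < m_k:
        # Case 5: both inside the masking area -> no audible error.
        return 0.0
    if ax >= m_k and ar >= m_k:
        # Case 2 (Equations 29-32): opposite sides of the masking area;
        # D21 + D22 + beta * D23, with D23 = 2 * Mk (the area's width).
        return (ax - m_k) + (ar - m_k) + beta * (m_k * 2)
    if ax >= m_k:
        # Case 3 (Equation 34): only the MDCT coefficient lies outside.
        return ax - m_k
    # Case 4 (Equation 36): only the coded value lies outside.
    return ar - m_k
```

Summing `dist_k` over k = 0 … N-1 yields the cumulative error Dist accumulated across steps 509 to 518.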
[0127] In step 518, N and k are compared, and if k is a smaller value than N, the process flow returns to step 509. If k has the same value as N, the process flow proceeds to step 519.
[0128] In step 519, cumulative error Dist and minimum error DistMIN are compared, and if cumulative error Dist is a smaller value than minimum error DistMIN, the process flow proceeds to step 520, whereas if cumulative error Dist is greater than or equal to minimum error DistMIN, the process flow proceeds to step 521.
[0129] In step 520, cumulative error Dist is assigned to minimum error DistMIN, j is assigned to code_indexMIN, and gain Gain is assigned to error minimum gain GainMIN, and the process flow proceeds to step 521.
[0130] In step 521, j is incremented by 1.
[0131] In step 522, total number of code vectors Nj and j are compared, and if j is a smaller value than Nj, the process flow returns to step 502. If j is greater than or equal to Nj, the process flow proceeds to step 523.
[0132] In step 523, Nd kinds of gain codes gaind (d = 0, 1, ..., Nd-1) are read from gain codebook 205, and quantization gain error gainerrd (d = 0, 1, ..., Nd-1) is found by means of Equation (37) for all d's.
[0133]
gainerrd = |GainMIN - gaind| (d = 0, 1, ..., Nd-1) [Equation 37]
[0134] Then, in step 523, d for which quantization gain error gainerrd (d = 0, 1, ..., Nd-1) is a minimum is found, and the found d is assigned to gain_indexMIN.
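The gain quantization of Equation (37) is a nearest-neighbor lookup over the gain codebook. A minimal sketch, in which the codebook contents are invented toy values (the patent does not specify them):

```python
# Step 523: pick the gain code gain_d closest to Gain_MIN.

gain_codebook = [0.25, 0.5, 1.0, 2.0]  # Nd = 4 illustrative gain codes

def quantize_gain(gain_min):
    # gainerr_d = |Gain_MIN - gain_d| for all d; return the minimizing d.
    errors = [abs(gain_min - g) for g in gain_codebook]
    return min(range(len(errors)), key=errors.__getitem__)

gain_index_min = quantize_gain(0.9)  # this index is assigned to gain_indexMIN
```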
[0135] In step 524, code_indexMIN that is the code vector index for which cumulative error Dist is a minimum, and gain_indexMIN found in step 523 , are output to transmission channel 103 in FIG.1 as coded information 102, and processing is terminated.
[0136] This completes the description of coding section 101 processing.
[0137] Next, voice/musical tone decoding apparatus 105 in FIG. 1 will be described using the detailed block diagram in FIG.7.
[0138] Shape codebook 204 and gain codebook 205 are the same as those shown in FIG.2.
[0139] Vector decoding section 701 has coded information 102 transmitted via transmission channel 103 as input, and using code_indexMIN and gain_indexMIN as the coded information, reads code vector codek(code_indexMIN) (k = 0, 1, ..., N-1) from shape codebook 204, and also reads gain code gain(gain_indexMIN) from gain codebook 205. Then vector decoding section 701 multiplies gain(gain_indexMIN) by codek(code_indexMIN) (k = 0, 1, ..., N-1), and outputs gain(gain_indexMIN) × codek(code_indexMIN) (k = 0, 1, ..., N-1) obtained as a result of the multiplication to quadrature transformation processing section 702 as a decoded MDCT coefficient.
[0140] Quadrature transformation processing section 702 has an internal buffer buf'k, and initializes this buffer in accordance with Equation (38).
[0141]
buf'k = 0 (k = 0, 1, ..., N-1) [Equation 38]
[0142] Next, decoded MDCT coefficient gain(gain_indexMIN) × codek(code_indexMIN) (k = 0, 1, ..., N-1) output from vector decoding section 701 is input, and decoded signal yn is found by means of Equation (39).
[0143]
yn = (2/N) Σ(k=0 to 2N-1) X'k cos[(2n+1+N)(2k+1)π/4N] (n = 0, 1, ..., N-1) [Equation 39]
[0144] Here, X'k is a vector linking decoded MDCT coefficient gain(gain_indexMIN) × codek(code_indexMIN) (k = 0, 1, ..., N-1) and buffer buf'k, and is found by means of Equation (40).
[0145]
X'k = buf'k (k = 0, 1, ..., N-1)
X'k = gain(gain_indexMIN) × code(k-N)(code_indexMIN) (k = N, ..., 2N-1) [Equation 40]
[0146] Buffer buf'k is then updated by means of Equation (41).
[0147]
buf'k = gain(gain_indexMIN) × codek(code_indexMIN) (k = 0, 1, ..., N-1) [Equation 41]
[0148] Decoded signal yn is then output as output signal 106.
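The decoder-side processing of Equations (38) through (41) can be sketched as follows. The frame length N is an illustrative choice, and the code evaluates the cosine sum directly rather than using any optimized transform:

```python
import math

N = 4  # frame length (illustrative; the patent leaves N as a design choice)

buf = [0.0] * N  # internal buffer buf'_k, initialized per Equation (38)

def decode_frame(decoded_mdct):
    """Find decoded signal y_n for one frame of decoded MDCT
    coefficients (gain x code vector), per Equations (39)-(41)."""
    global buf
    # Equation (40): X'_k links the buffer (first half) with the
    # current decoded coefficients (second half).
    X = buf + list(decoded_mdct)  # length 2N
    # Equation (39): synthesis by the cosine sum over all 2N entries.
    y = [(2.0 / N) * sum(X[k] * math.cos((2 * n + 1 + N) * (2 * k + 1)
                                         * math.pi / (4 * N))
                         for k in range(2 * N))
         for n in range(N)]
    # Equation (41): the buffer carries the coefficients into the next frame.
    buf = list(decoded_mdct)
    return y
```

Calling `decode_frame` once per frame reproduces the buffer linkage: the second call's output still depends on the first frame's coefficients through `buf`.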
[0149] By thus providing a quadrature transformation processing section that finds an input signal MDCT coefficient, an auditory masking characteristic value calculation section that finds an auditory masking characteristic value, and a vector quantization section that performs vector quantization using an auditory masking characteristic value, and by performing the vector quantization distance calculation according to the relative positional relationship between the auditory masking characteristic value, MDCT coefficient, and quantized MDCT coefficient, it is possible to select a suitable code vector that minimizes degradation of a signal that has a large auditory effect, and to obtain a high-quality output signal.
[0150] It is also possible to perform quantization in vector quantization section 202 by applying acoustic weighting filters to the distance calculations in the above-described Case 1 through Case 5.
[0151] Also, in this embodiment, a case has been described in which MDCT coefficient coding is performed, but the present invention can also be applied, and the same kind of actions and effects can be obtained, in a case in which coding of a post-transformation signal (frequency parameter) is performed using the Fourier transform, discrete cosine transform (DCT), quadrature mirror filter (QMF), or suchlike quadrature transformation.

[0152] Furthermore, in this embodiment, a case has been described in which coding is performed by means of vector quantization, but there are no restrictions on the coding method in the present invention, and, for example, coding may also be performed by means of divided vector quantization or multi-stage vector quantization.
[ 0153 ] It is also possible for voice/musical tone coding apparatus 101 to have the procedure shown in the flowchart in FIG.16 executed by a computer by means of a program.
[0154] As described above, by calculating an auditory masking characteristic value from an input signal, considering all relative positional relationships of the MDCT coefficient, coded value, and auditory masking characteristic value, and applying a distance calculation method suited to human hearing, it is possible to select a suitable code vector that minimizes degradation of a signal that has a large auditory effect, and to obtain good decoded voice even when an input signal is decoded at a low bit rate.
[0155] In Patent Literature 1, only "Case 5" in FIG.6 is disclosed, but with the present invention, in addition to this, by employing a distance calculation method that takes an auditory masking characteristic value into consideration for all combinations of relationships as shown in "Case 2," "Case 3," and "Case 4," considering all relative positional relationships of the input signal MDCT coefficient, coded value, and auditory masking characteristic value, and applying a distance calculation method suited to hearing, it is possible to obtain higher-quality coded voice even when an input signal is quantized at a low bit rate.
[0156] Also, the present invention is based on the fact that actual audibility differs between the case in which an input signal MDCT coefficient or coded value lies within the auditory masking area and the case in which it lies outside that area, even if an unmodified distance calculation would treat them alike; therefore, more natural audibility can be provided by changing the distance calculation method when performing vector quantization.
[0157] (Embodiment 2) In Embodiment 2 of the present invention, an example is described in which vector quantization using the auditory masking characteristic values described in Embodiment 1 is applied to scalable coding.
[0158] In this embodiment, a case is described below in which, in a two-layer voice coding and decoding method composed of a base layer and an enhancement layer, vector quantization is performed using an auditory masking characteristic value in the enhancement layer.
[0159] A scalable voice coding method is a method whereby a voice signal is split into a plurality of layers based on frequency characteristics and coding is performed. Specifically, signals of each layer are calculated using a residual signal representing the difference between a lower layer input signal and a lower layer output signal. On the decoding side, the signals of these layers are added and a voice signal is decoded. This technique enables sound quality to be controlled flexibly, and also makes noise-tolerant voice signal transfer possible.
[0160] In this embodiment, a case in which the base layer performs CELP type voice coding and decoding will be described as an example.
[0161] FIG.8 is a block diagram showing the configuration of a coding apparatus and decoding apparatus that use an MDCT coefficient vector quantization method according to Embodiment 2 of the present invention. In FIG.8, the coding apparatus is composed of base layer coding section 801, base layer decoding section 803, and enhancement layer coding section 805, and the decoding apparatus is composed of base layer decoding section 808, enhancement layer decoding section 810, and adding section 812.
[0162] Base layer coding section 801 codes input signal 800 using a CELP type voice coding method, calculates base layer coded information 802, and outputs this to base layer decoding section 803, and to base layer decoding section 808 via transmission channel 807.
[ 0163 ] Base layer decoding section 803 decodes base layer coded information 802 using a CELP type voice decoding method, calculates base layer decoded signal 804, and outputs this to enhancement layer coding section 805.

[0164] Enhancement layer coding section 805 has base layer decoded signal 804 output by base layer decoding section 803, and input signal 800, as input, codes the residual signal of input signal 800 and base layer decoded signal 804 by means of vector quantization using an auditory masking characteristic value, and outputs enhancement layer coded information 806 found by means of quantization to enhancement layer decoding section 810 via transmission channel 807. Details of enhancement layer coding section 805 will be given later herein.
[ 0165 ] Base layer decoding section 808 decodes base layer coded information 802 using a CELP type voice decoding method, and outputs a base layer decoded signal 809 found by decoding to adding section 812.
[0166] Enhancement layer decoding section 810 decodes enhancement layer coded information 806, and outputs enhancement layer decoded signal 811 found by decoding to adding section 812.
[0167] Adding section 812 adds together base layer decoded signal 809 output from base layer decoding section 808 and enhancement layer decoded signal 811 output from enhancement layer decoding section 810, and outputs the voice/musical tone signal that is the addition result as output signal 813.
[0168] Next, base layer coding section 801 will be described using the block diagram in FIG.9.
[0169] Input signal 800 of base layer coding section 801 is input to a preprocessing section 901. Preprocessing section 901 performs high pass filter processing that removes a DC component, and waveform shaping processing and pre-emphasis processing aiming at performance improvement of subsequent coding processing, and outputs the signal (Xin) that has undergone this processing to LPC analysis section 902 and adding section 905.
[0170] LPC analysis section 902 performs linear prediction analysis using Xin, and outputs the analysis result (linear prediction coefficient) to LPC quantization section 903. LPC quantization section 903 performs quantization processing of the linear prediction coefficient (LPC) output from LPC analysis section 902, outputs the quantized LPC to combining filter 904, and also outputs a code (L) indicating the quantized LPC to multiplexing section 914.
[0171] Using a filter coefficient based on the quantized LPC, combining filter 904 generates a composite signal by performing filter combining on a drive sound source output from an adding section 911 described later herein, and outputs the composite signal to adding section 905.
[0172] Adding section 905 calculates an error signal by inverting the polarity of the composite signal and adding it to Xin, and outputs the error signal to acoustic weighting section 912.
[ 0173 ] Adaptive sound source codebook 906 stores a drive sound source output by adding section 911 in a buffer, extracts one frame's worth of samples from a past drive sound source specified by a signal output from parameter determination section 913 as an adaptive sound source vector, and outputs this to multiplication section 909.
[0174] Quantization gain generation section 907 outputs the quantization adaptive sound source gain and quantization fixed sound source gain specified by a signal output from parameter determination section 913 to multiplication section 909 and multiplication section 910, respectively.
[0175] Fixed sound source codebook 908 multiplies a pulse sound source vector having a form specified by a signal output from parameter determination section 913 by a spreading vector, and outputs the obtained fixed sound source vector to multiplication section 910.
[0176] Multiplication section 909 multiplies the quantization adaptive sound source gain output from quantization gain generation section 907 by the adaptive sound source vector output from adaptive sound source codebook 906, and outputs the result to adding section 911. Multiplication section 910 multiplies the quantization fixed sound source gain output from quantization gain generation section 907 by the fixed sound source vector output from fixed sound source codebook 908, and outputs the result to adding section 911.
[0177] Adding section 911 has as input the post-gain-multiplication adaptive sound source vector and fixed sound source vector from multiplication section 909 and multiplication section 910 respectively, and outputs the drive sound source that is the addition result to combining filter 904 and adaptive sound source codebook 906. The drive sound source input to adaptive sound source codebook 906 is stored in a buffer.
[0178] Acoustic weighting section 912 performs acoustic weighting on the error signal output from adding section 905, and outputs the result to parameter determination section 913 as coding distortion.
[0179] Parameter determination section 913 selects, from adaptive sound source codebook 906, fixed sound source codebook 908, and quantization gain generation section 907, the adaptive sound source vector, fixed sound source vector, and quantization gain that minimize the coding distortion output from acoustic weighting section 912, and outputs an adaptive sound source vector code (A), sound source gain code (G), and fixed sound source vector code (F) indicating the selection results to multiplexing section 914.
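Paragraphs [0171] through [0179] describe an analysis-by-synthesis search. The sketch below is a heavily simplified, hypothetical illustration of the parameter-determination step only: the codebooks and gain table are invented toy data, and the LPC combining filter and acoustic weighting of the actual apparatus are omitted, so the error is measured directly on the drive sound source rather than on the weighted synthesized signal.

```python
import itertools

# Invented toy codebooks (real CELP codebooks are much larger).
adaptive_cb = [[1.0, 0.0], [0.5, 0.5]]
fixed_cb = [[0.0, 1.0], [1.0, -1.0]]
gains = [(0.5, 0.5), (1.0, 0.25)]  # (adaptive gain, fixed gain) pairs

def search(target):
    """Exhaustively pick the entries whose drive sound source best
    matches the target, returning the codes (A, F, G) as indices."""
    best = None
    for (a_idx, a), (f_idx, f), (g_idx, (ga, gf)) in itertools.product(
            enumerate(adaptive_cb), enumerate(fixed_cb), enumerate(gains)):
        # Drive sound source = gain-scaled adaptive + fixed vectors
        # (the role of multiplication sections 909/910 and adding section 911).
        drive = [ga * av + gf * fv for av, fv in zip(a, f)]
        err = sum((t - d) ** 2 for t, d in zip(target, drive))
        if best is None or err < best[0]:
            best = (err, (a_idx, f_idx, g_idx))
    return best[1]
```

In the real coder the candidate excitation is passed through combining filter 904 and acoustic weighting section 912 before the distortion is measured, and the search is structured rather than exhaustive.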
[0180] Multiplexing section 914 has a code (L) indicating the quantized LPC as input from LPC quantization section 903, and code (A) indicating an adaptive sound source vector, code (F) indicating a fixed sound source vector, and code (G) indicating quantization gain as input from parameter determination section 913, multiplexes this information, and outputs the result as base layer coded information 802.
[0181] Base layer decoding section 803 (808) will now be described using FIG.10.
[0182] In FIG.10, base layer coded information 802 input to base layer decoding section 803 (808) is separated into individual codes (L, A, G, F) by demultiplexing section 1001. Separated LPC code (L) is output to LPC decoding section 1002, separated adaptive sound source vector code (A) is output to adaptive sound source codebook 1005, separated sound source gain code (G) is output to quantization gain generation section 1006, and separated fixed sound source vector code (F) is output to fixed sound source codebook 1007.
[0183] LPC decoding section 1002 decodes a quantized LPC from code (L) output from demultiplexing section 1001, and outputs the result to combining filter 1003.
[0184] Adaptive sound source codebook 1005 extracts one frame's worth of samples from a past drive sound source designated by code (A) output from demultiplexing section 1001 as an adaptive sound source vector, and outputs this to multiplication section 1008.
[0185] Quantization gain generation section 1006 decodes the quantization adaptive sound source gain and quantization fixed sound source gain designated by sound source gain code (G) output from demultiplexing section 1001, and outputs these to multiplication section 1008 and multiplication section 1009.

[0186] Fixed sound source codebook 1007 generates a fixed sound source vector designated by code (F) output from demultiplexing section 1001, and outputs this to multiplication section 1009.
[0187] Multiplication section 1008 multiplies the adaptive sound source vector by the quantization adaptive sound source gain, and outputs the result to adding section 1010. Multiplication section 1009 multiplies the fixed sound source vector by the quantization fixed sound source gain, and outputs the result to adding section 1010.
[0188] Adding section 1010 performs addition of the post-gain-multiplication adaptive sound source vector and fixed sound source vector output from multiplication section 1008 and multiplication section 1009, generates a drive sound source, and outputs this to combining filter 1003 and adaptive sound source codebook 1005.
[0189] Using the filter coefficient decoded by LPC decoding section 1002, combining filter 1003 performs filter combining of the drive sound source output from adding section 1010, and outputs the combined signal to postprocessing section 1004.
[0190] Postprocessing section 1004 executes, on the signal output from combining filter 1003, processing that improves the subjective voice sound quality such as formant emphasis and pitch emphasis, processing that improves the subjective sound quality of stationary noise, and so forth, and outputs the resulting signal as base layer decoded signal 804 (809).
[0191] Enhancement layer coding section 805 will now be described using FIG.11.
[0192] Enhancement layer coding section 805 in FIG.11 is similar to that shown in FIG.2, except that differential signal 1102 of base layer decoded signal 804 and input signal 800 is input to quadrature transformation processing section 1103; auditory masking characteristic value calculation section 203 is assigned the same code as in FIG.2 and is not described here.
[0193] As with coding section 101 of Embodiment 1, enhancement layer coding section 805 divides input signal 800 into sections of N samples (where N is a natural number), takes N samples as one frame, and performs coding on a frame-by-frame basis. Here, input signal 800 subject to coding will be designated xn (n = 0, 1, ..., N-1).
[ 0194 ] Input signal xn 800 is input to auditory masking characteristic value calculation section 203 and adding section 1101 . Also, base layer decoded signal 804 output from base layer decoding section 803 is input to adding section 1101 and quadrature transformation processing section 1103.
[0195] Adding section 1101 finds residual signal 1102 xresidn (n = 0, 1, ..., N-1) by means of Equation (42), and outputs residual signal 1102 xresidn to quadrature transformation processing section 1103.
[0196]

xresidn = xn - xbasen (n = 0, 1, ..., N-1) [Equation 42]
[0197] Here, xbasen (n = 0, 1, ..., N-1) is base layer decoded signal 804. Next, the process performed by quadrature transformation processing section 1103 will be described.
[0198] Quadrature transformation processing section 1103 has internal buffers bufbasen (n = 0, 1, ..., N-1) used in base layer decoded signal xbasen 804 processing, and bufresidn (n = 0, 1, ..., N-1) used in residual signal xresidn 1102 processing, and initializes these buffers by means of Equation (43) and Equation (44) respectively.
[0199]
bufbasen = 0 (n = 0, 1, ..., N-1) [Equation 43]
[0200]
bufresidn = 0 (n = 0, 1, ..., N-1) [Equation 44]
[0201] Quadrature transformation processing section 1103 then finds base layer quadrature transformation coefficient Xbasek 1104 and residual quadrature transformation coefficient Xresidk 1105 by performing a modified discrete cosine transform (MDCT) on base layer decoded signal xbasen 804 and residual signal xresidn 1102, respectively. Base layer quadrature transformation coefficient Xbasek 1104 here is found by means of Equation (45).
[0202]
Xbasek = (2/N) Σ(n=0 to 2N-1) xbase'n cos[(2n+1+N)(2k+1)π/4N] (k = 0, 1, ..., N-1) [Equation 45]

[0203] Here, xbase'n is a vector linking base layer decoded signal xbasen 804 and buffer bufbasen, and quadrature transformation processing section 1103 finds xbase'n by means of Equation (46). Also, k is the index of each sample in one frame.
[0204]
xbase'n = bufbasen (n = 0, 1, ..., N-1)
xbase'n = xbase(n-N) (n = N, ..., 2N-1) [Equation 46]
[0205] Next, quadrature transformation processing section 1103 updates buffer bufbasen by means of Equation (47).
[0206]
bufbasen = xbasen (n = 0, 1, ..., N-1) [Equation 47]
[0207] Also, quadrature transformation processing section 1103 finds residual quadrature transformation coefficient Xresidk 1105 by means of Equation (48).
[0208]
Xresidk = (2/N) Σ(n=0 to 2N-1) xresid'n cos[(2n+1+N)(2k+1)π/4N] (k = 0, 1, ..., N-1) [Equation 48]
[0209] Here, xresid'n is a vector linking residual signal xresidn 1102 and buffer bufresidn, and quadrature transformation processing section 1103 finds xresid'n by means of Equation (49). Also, k is the index of each sample in one frame.
[ 0210 ]

xresid'n = bufresidn (n = 0, 1, ..., N-1)
xresid'n = xresid(n-N) (n = N, ..., 2N-1) [Equation 49]
[0211] Next, quadrature transformation processing section 1103 updates buffer bufresidn by means of Equation (50).
[0212]
bufresidn = xresidn (n = 0, 1, ..., N-1) [Equation 50]
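Equations (43) through (50) describe an MDCT whose input window links the current frame with the previous one through an internal buffer; the same machinery is instantiated twice, once for the base layer decoded signal and once for the residual. A minimal sketch, with an illustrative frame length N:

```python
import math

N = 4  # frame length (illustrative)

def make_mdct():
    """MDCT per Equations (45)-(50): each call transforms one frame,
    linking it with the previous frame held in an internal buffer."""
    buf = [0.0] * N  # Equations (43)/(44): buffer initialized to zero
    def mdct(frame):
        # Equations (46)/(49): x' links the buffer (old frame) and new frame.
        xp = buf + list(frame)
        coeffs = [(2.0 / N) * sum(xp[n] * math.cos((2 * n + 1 + N) * (2 * k + 1)
                                                   * math.pi / (4 * N))
                                  for n in range(2 * N))
                  for k in range(N)]
        buf[:] = frame  # Equations (47)/(50): buffer update
        return coeffs
    return mdct

mdct_base = make_mdct()   # used for the base layer decoded signal (Xbase_k)
mdct_resid = make_mdct()  # used for the residual signal (Xresid_k)
```

As in the text, the two transforms keep independent buffers, so feeding one transform never disturbs the other's frame history.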
[0213] Quadrature transformation processing section 1103 then outputs base layer quadrature transformation coefficient Xbasek 1104 and residual quadrature transformation coefficient Xresidk 1105 to vector quantization section 1106.
[0214] Vector quantization section 1106 has, as input, base layer quadrature transformation coefficient Xbasek 1104 and residual quadrature transformation coefficient Xresidk 1105 from quadrature transformation processing section 1103, and auditory masking characteristic value Mk 1107 from auditory masking characteristic value calculation section 203, and using shape codebook 1108 and gain codebook 1109, performs coding of residual quadrature transformation coefficient Xresidk 1105 by means of vector quantization using the auditory masking characteristic value, and outputs enhancement layer coded information 806 obtained by coding.
[0215] Here, shape codebook 1108 is composed of previously created Ne kinds of N-dimensional code vectors coderesidke (e = 0, 1, ..., Ne-1; k = 0, 1, ..., N-1), and is used when performing vector quantization of residual quadrature transformation coefficient Xresidk 1105 in vector quantization section 1106.
[0216] Also, gain codebook 1109 is composed of previously created Nf kinds of residual gain codes gainresidf (f = 0, 1, ..., Nf-1), and is used when performing vector quantization of residual quadrature transformation coefficient Xresidk 1105 in vector quantization section 1106.
[0217] The process performed by vector quantization section 1106 will now be described in detail using FIG.12. In step 1201, initialization is performed by assigning 0 to code vector index e in shape codebook 1108, and a sufficiently large value to minimum error DistresidMIN.
[0218] In step 1202, N-dimensional code vector coderesidke (k = 0, 1, ..., N-1) is read from shape codebook 1108.
[0219] In step 1203, residual quadrature transformation coefficient Xresidk output from quadrature transformation processing section 1103 is input, and gain Gainresid of code vector coderesidke (k = 0, 1, ..., N-1) read in step 1202 is found by means of Equation (51).
[0220]

Gainresid = Σ(k=0 to N-1) Xresidk × coderesidke / Σ(k=0 to N-1) (coderesidke)^2 [Equation 51]
[0221] In step 1204, 0 is assigned to calc_countresid indicating the number of executions of step 1205.
[0222] In step 1205, auditory masking characteristic value Mk output from auditory masking characteristic value calculation section 203 is input, and temporary gain temp2k (k = 0, 1, ..., N-1) is found by means of Equation (52).
[0223]
temp2k = coderesidke (|coderesidke × Gainresid + Xbasek| ≥ Mk)
temp2k = 0 (|coderesidke × Gainresid + Xbasek| < Mk) (k = 0, 1, ..., N-1) [Equation 52]
[0224] In Equation (52), if k satisfies the condition |coderesidke × Gainresid + Xbasek| ≥ Mk, coderesidke is assigned to temporary gain temp2k, and if k satisfies the condition |coderesidke × Gainresid + Xbasek| < Mk, 0 is assigned to temp2k. Here, k is the index of each sample in one frame.
[0225] Then, in step 1205, gain Gainresid is found by means of Equation (53).
[0226]

Gainresid = Σ(k=0 to N-1) Xresidk × temp2k / Σ(k=0 to N-1) (temp2k)^2 [Equation 53]
[0227] If temporary gain temp2k is 0 for all k's, 0 is assigned to gain Gainresid. Also, residual coded value Rresidk is found from gain Gainresid and code vector coderesidke by means of Equation (54).
[0228]
Rresidk = Gainresid × coderesidke (k = 0, 1, ..., N-1) [Equation 54]
[0229] Also, addition coded value Rplusk is found from residual coded value Rresidk and base layer quadrature transformation coefficient Xbasek by means of Equation (55).
[0230]
Rplusk = Rresidk + Xbasek (k = 0, 1, ..., N-1) [Equation 55]
[0231] In step 1206, calc_countresid is incremented by 1.
[0232] In step 1207, calc_countresid and a predetermined non-negative integer Nresidc are compared, and the process flow returns to step 1205 if calc_countresid is a smaller value than Nresidc, or proceeds to step 1208 if calc_countresid is greater than or equal to Nresidc.
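The iterative gain computation of steps 1203 through 1207 (Equations (51) to (55)) can be sketched as follows. The inputs in the usage test are invented toy values, and the `iterations` parameter stands in for the predetermined repeat count of step 1207:

```python
def residual_gain(xresid, xbase, code, mask, iterations=2):
    """Gain of one code vector with masking-aware refinement."""
    # Equation (51): initial least-squares gain of the code vector.
    gain = sum(x * c for x, c in zip(xresid, code)) / sum(c * c for c in code)
    for _ in range(iterations):
        # Equation (52): keep only samples audible in the addition domain
        # (|code_k * gain + Xbase_k| >= M_k); zero out the masked ones.
        temp = [c if abs(c * gain + b) >= m else 0.0
                for c, b, m in zip(code, xbase, mask)]
        denom = sum(t * t for t in temp)
        # Equation (53): recompute the gain over the unmasked samples
        # (0 if every sample was masked).
        gain = sum(x * t for x, t in zip(xresid, temp)) / denom if denom else 0.0
    # Equation (54): residual coded value; Equation (55): addition coded value.
    rresid = [gain * c for c in code]
    rplus = [r + b for r, b in zip(rresid, xbase)]
    return gain, rresid, rplus
```

With an all-zero mask nothing is suppressed, so the refined gain coincides with the plain least-squares gain of Equation (51).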
[0233] In step 1208, 0 is assigned to cumulative error Distresid, and 0 is also assigned to sample index k. Also, in step 1208, addition MDCT coefficient Xplusk is found by means of Equation (56).
[0234]
Xplusk = Xbasek + Xresidk (k = 0, 1, ..., N-1) [Equation 56]
[0235] Next, in steps 1209, 1211, 1212, and 1214, case determination is performed for the relative positional relationship between auditory masking characteristic value Mk 1107, addition coded value Rplusk, and addition MDCT coefficient Xplusk, and distance calculation is performed in step 1210, 1213, 1215, or 1216 according to the case determination result. This case determination according to the relative positional relationship is shown in FIG.13. In FIG.13, a white circle symbol (○) signifies an addition MDCT coefficient Xplusk, and a black circle symbol (●) signifies an addition coded value Rplusk. The concepts in FIG.13 are the same as explained for FIG.6 in Embodiment 1.
[0236] In step 1209, whether or not the relative positional relationship between auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk corresponds to "Case 1" in FIG.13 is determined by means of the conditional expression in Equation (57).
[0237]
(|Xplusk| ≥ Mk) and (|Rplusk| ≥ Mk) and (Xplusk × Rplusk ≥ 0) [Equation 57]
[0238] Equation (57) signifies a case in which the absolute value of addition MDCT coefficient Xplusk and the absolute value of addition coded value Rplusk are both greater than or equal to auditory masking characteristic value Mk, and addition MDCT coefficient Xplusk and addition coded value Rplusk have the same sign. If auditory masking characteristic value Mk, addition MDCT coefficient Xplusk, and addition coded value Rplusk satisfy the conditional expression in Equation (57), the process flow proceeds to step 1210, and if they do not satisfy the conditional expression in Equation (57), the process flow proceeds to step 1211.
[0239] In step 1210, error Distresid1 between addition coded value Rplusk and addition MDCT coefficient Xplusk is found by means of Equation (58), error Distresid1 is added to cumulative error Distresid, and the process flow proceeds to step 1217.
[0240]
Distresid1 = Dresid11 = |Xresidk - Rresidk| [Equation 58]
[0241] In step 1211, whether or not the relative positional relationship between auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk corresponds to "Case 5" in FIG.13 is determined by means of the conditional expression in Equation (59).
[ 0242 ]
(|Xplusk| < Mk) and (|Rplusk| < Mk) [Equation 59]
[0243] Equation (59) signifies a case in which the absolute value of addition MDCT coefficient Xplusk and the absolute value of addition coded value Rplusk are both less than auditory masking characteristic value Mk. If auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk satisfy the conditional expression in Equation (59), the error between addition coded value Rplusk and addition MDCT coefficient Xplusk is taken to be 0, nothing is added to cumulative error Distresid, and the process flow proceeds to step 1217. If auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk do not satisfy the conditional expression in Equation (59), the process flow proceeds to step 1212.
[0244] In step 1212, whether or not the relative positional relationship between auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk corresponds to "Case 2" in FIG.13 is determined by means of the conditional expression in Equation (60).
[0245]
(|Xplusk| ≥ Mk) and (|Rplusk| ≥ Mk) and (Xplusk × Rplusk < 0) [Equation 60]
[0246] Equation (60) signifies a case in which the absolute value of addition MDCT coefficient Xplusk and the absolute value of addition coded value Rplusk are both greater than or equal to auditory masking characteristic value Mk, and addition MDCT coefficient Xplusk and addition coded value Rplusk differ in sign. If auditory masking characteristic value Mk, addition MDCT coefficient Xplusk, and addition coded value Rplusk satisfy the conditional expression in Equation (60), the process flow proceeds to step 1213, and if they do not satisfy the conditional expression in Equation (60), the process flow proceeds to step 1214.
[0247] In step 1213, error Distresid2 between addition coded value Rplusk and addition MDCT coefficient Xplusk is found by means of Equation (61), error Distresid2 is added to cumulative error Distresid, and the process flow proceeds to step 1217.
[0248]

Distresid2 = Dresid21 + Dresid22 + βresid × Dresid23 [Equation 61]
[0249] Here, βresid is a value set as appropriate according to addition MDCT coefficient Xplusk, addition coded value Rplusk, and auditory masking characteristic value Mk. A value of 1 or less is suitable for βresid. Dresid21, Dresid22, and Dresid23 are found by means of Equation (62), Equation (63), and Equation (64), respectively.
[0250]
Dresid21 = |Xplusk| - Mk [Equation 62]
[ 0251 ]
Dresid22 = |Rplusk| - Mk [Equation 63]
[0252]
Dresid23 = Mk × 2 [Equation 64]
[0253] In step 1214, whether or not the relative positional relationship between auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk corresponds to "Case 3" in FIG.13 is determined by means of the conditional expression in Equation (65).
[0254]
(|Xplusk| ≥ Mk) and (|Rplusk| < Mk) [Equation 65]
[0255] Equation (65) signifies a case in which the absolute value of addition MDCT coefficient Xplusk is greater than or equal to auditory masking characteristic value Mk, and addition coded value Rplusk is less than auditory masking characteristic value Mk. If auditory masking characteristic value Mk, addition MDCT coefficient Xplusk, and addition coded value Rplusk satisfy the conditional expression in Equation (65), the process flow proceeds to step 1215, and if they do not satisfy the conditional expression in Equation (65), the process flow proceeds to step 1216.
[0256] In step 1215, error Distresid3 between addition coded value Rplusk and addition MDCT coefficient Xplusk is found by means of Equation (66), error Distresid3 is added to cumulative error Distresid, and the process flow proceeds to step 1217.
[0257]
Distresid3 = Dresid31 = |Xplusk| − Mk [Equation 66]
[0258] In step 1216, the relative positional relationship between auditory masking characteristic value Mk, addition coded value Rplusk, and addition MDCT coefficient Xplusk corresponds to "Case 4" in FIG.13, and the conditional expression in Equation (67) is satisfied.
[0259]
(|Xplusk| < Mk) and (|Rplusk| ≥ Mk) [Equation 67]
[0260] Equation (67) signifies a case in which the absolute value of addition MDCT coefficient Xplusk is less than auditory masking characteristic value Mk, and the absolute value of addition coded value Rplusk is greater than or equal to auditory masking characteristic value Mk. In step 1216, error Distresid4 between addition coded value Rplusk and addition MDCT coefficient Xplusk is found by means of Equation (68), error Distresid4 is added to cumulative error Distresid, and the process flow proceeds to step 1217.
[0261]
Distresid4 = Dresid41 = |Rplusk| − Mk [Equation 68]
[0262] In step 1217, k is incremented by 1.
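As a concrete illustration, the per-coefficient error of steps 1212 through 1216 (Equations 60 through 68) can be sketched as below. This is a hedged sketch, not the patent's implementation: the function name is illustrative, and Cases 1 and 5, which the flowchart decides in earlier steps not reproduced here, are simply signaled by returning None.

```python
def residual_distance(Xplus_k, Rplus_k, Mk, beta_resid=1.0):
    """Per-coefficient error for Cases 2-4 of the enhancement layer search.
    Xplus_k: addition MDCT coefficient, Rplus_k: addition coded value,
    Mk: auditory masking characteristic value, beta_resid <= 1 (Equation 61).
    Returns None for pairs falling under Case 1 or Case 5, which the
    flowchart handles at steps 1209-1211 (not reproduced in this sketch)."""
    if abs(Xplus_k) >= Mk and abs(Rplus_k) >= Mk and Xplus_k * Rplus_k < 0:
        # Case 2 (Equation 60 holds): sum of Equations 62-64 per Equation 61
        Dresid21 = abs(Xplus_k) - Mk
        Dresid22 = abs(Rplus_k) - Mk
        Dresid23 = Mk * 2
        return Dresid21 + Dresid22 + beta_resid * Dresid23
    if abs(Xplus_k) >= Mk and abs(Rplus_k) < Mk:
        # Case 3 (Equation 65 holds): Equation 66
        return abs(Xplus_k) - Mk
    if abs(Xplus_k) < Mk and abs(Rplus_k) >= Mk:
        # Case 4 (Equation 67 holds): Equation 68
        return abs(Rplus_k) - Mk
    return None  # Case 1 or Case 5
```

Summing this value over k = 0, …, N−1 gives the cumulative error Distresid of steps 1209 through 1218.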
[0263] In step 1218, N and k are compared, and if k is a smaller value than N, the process flow returns to step 1209. If k is greater than or equal to N, the process flow proceeds to step 1219.
[0264] In step 1219, cumulative error Distresid and minimum error DistresidMIN are compared, and if cumulative error Distresid is a smaller value than minimum error DistresidMIN, the process flow proceeds to step 1220, whereas if cumulative error Distresid is greater than or equal to minimum error DistresidMIN, the process flow proceeds to step 1221.
[0265] In step 1220, cumulative error Distresid is assigned to minimum error DistresidMIN, e is assigned to coderesid_indexMIN, and Gainresid is assigned to error minimum gain GainresidMIN, and the process flow proceeds to step 1221.
[0266] In step 1221, e is incremented by 1.
[0267] In step 1222, total number of vectors Ne and e are compared, and if e is a smaller value than Ne, the process flow returns to step 1202. If e is greater than or equal to Ne, the process flow proceeds to step 1223.
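The loop structure of steps 1201 through 1222 — search all Ne shape code vectors, accumulate a masking-aware error over k, and track the minimizing index — can be sketched as follows. The least-squares gain fit and the per-coefficient distance callback are illustrative assumptions standing in for the patent's Gainresid calculation and Case 1–5 logic; names are not from the patent.

```python
import numpy as np

def search_shape_codebook(Xplus, M, shape_codebook, distance_fn):
    """Skeleton of steps 1201-1222: for each code vector index e, fit a
    gain, accumulate error Distresid over k (steps 1209-1218), and keep
    the minimizing index and gain (steps 1219-1220)."""
    dist_min = np.inf          # DistresidMIN, initialized to a large value
    e_min, gain_min = 0, 0.0   # coderesid_indexMIN, GainresidMIN
    for e, code in enumerate(shape_codebook):        # steps 1202, 1221-1222
        # assumed least-squares gain fit, standing in for Gainresid
        gain = float(np.dot(Xplus, code) / np.dot(code, code))
        R = gain * code                              # addition coded values Rplus_k
        dist = sum(distance_fn(x, r, m) for x, r, m in zip(Xplus, R, M))
        if dist < dist_min:                          # steps 1219-1220
            dist_min, e_min, gain_min = dist, e, gain
    return e_min, gain_min, dist_min
```

With a plain squared-error `distance_fn` this reduces to ordinary gain-shape vector quantization; the patent's contribution is the masking-aware distance plugged in at that point.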

[0268] In step 1223, Nf kinds of residual gain code gainresidf (f = 0, 1, …, Nf−1) are read from gain codebook 1109, and quantization residual gain error gainresiderrf (f = 0, 1, …, Nf−1) is found by means of Equation (69) for all f's.
[0269]
gainresiderrf = |GainresidMIN − gainresidf| (f = 0, 1, …, Nf−1) [Equation 69]
[0270] Then, in step 1223, f for which quantization residual gain error gainresiderrf (f = 0, 1, …, Nf−1) is a minimum is found, and the found f is assigned to gainresid_indexMIN.
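Step 1223's scalar gain quantization (Equation 69 followed by taking the minimizing f) amounts to a nearest-neighbor search over the gain codebook; a minimal sketch with illustrative names:

```python
import numpy as np

def quantize_residual_gain(gainresid_min, gain_codebook):
    """Equation (69): gainresiderr_f = |GainresidMIN - gainresid_f| for all f,
    then return the f that minimizes the error (assigned to gainresid_indexMIN)."""
    errs = np.abs(np.asarray(gain_codebook) - gainresid_min)
    return int(np.argmin(errs))
```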
[0271] In step 1224, coderesid_indexMIN, which is the code vector index for which cumulative error Distresid is a minimum, and gainresid_indexMIN found in step 1223, are output to transmission channel 807 as enhancement layer coded information 806, and processing is terminated.
[0272] Next, enhancement layer decoding section 810 will be described using the block diagram in FIG.14. In the same way as shape codebook 1108, shape codebook 1403 is composed of Ne kinds of N-dimensional code vectors coderesidk^e (e = 0, 1, …, Ne−1; k = 0, 1, …, N−1), and in the same way as gain codebook 1109, gain codebook 1404 is composed of Nf kinds of residual gain codes gainresidf (f = 0, 1, …, Nf−1).
[0273] Vector decoding section 1401 has enhancement layer coded information 806 transmitted via transmission channel 807 as input, and using coderesid_indexMIN and gainresid_indexMIN as the coded information, reads code vector coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1) from shape codebook 1403, and also reads residual gain code gainresid^(gainresid_indexMIN) from gain codebook 1404. Then, vector decoding section 1401 multiplies gainresid^(gainresid_indexMIN) by coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1), and outputs gainresid^(gainresid_indexMIN) · coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1) obtained as a result of the multiplication to residual quadrature transformation processing section 1402 as a decoded residual quadrature transformation coefficient.
[0274] The process performed by residual quadrature transformation processing section 1402 will now be described.
[0275] Residual quadrature transformation processing section 1402 has an internal buffer bufresid′k, and initializes this buffer in accordance with Equation (70).
[0276]
bufresid′k = 0 (k = 0, 1, …, N−1) [Equation 70]
[0277] Decoded residual quadrature transformation coefficient gainresid^(gainresid_indexMIN) · coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1) output from vector decoding section 1401 is input, and enhancement layer decoded signal yresidn 811 is found by means of Equation (71).
[0278]
yresidn = (2/N) Σ_{k=0}^{2N−1} Xresid′k · cos[(2n + 1 + N)(2k + 1)π / (4N)] (n = 0, 1, …, N−1) [Equation 71]
[0279] Here, Xresid′k is a vector linking decoded residual quadrature transformation coefficient gainresid^(gainresid_indexMIN) · coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1) and buffer bufresid′k, and is found by means of Equation (72).
[0280]
Xresid′k = bufresid′k (k = 0, 1, …, N−1)
Xresid′k = gainresid^(gainresid_indexMIN) · coderesid(k−N)^(coderesid_indexMIN) (k = N, N+1, …, 2N−1) [Equation 72]
[0281] Buffer bufresidk' is then updated by means of Equation (73).
[0282]
bufresid′k = gainresid^(gainresid_indexMIN) · coderesidk^(coderesid_indexMIN) (k = 0, 1, …, N−1) [Equation 73]
[0283] Enhancement layer decoded signal yresidn 811 is then output.
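The decoder-side processing of Equations (70) through (73) — scale the code vector by the gain, link it with the previous frame's buffered coefficients, inverse-transform, and update the buffer — can be sketched as follows. Function and variable names are illustrative, and the π factor in the cosine kernel is the standard MDCT convention assumed here.

```python
import numpy as np

def decode_enhancement_layer(gainresid, coderesid, bufresid):
    """Sketch of residual quadrature transformation processing section 1402.
    gainresid: decoded residual gain, coderesid: length-N code vector,
    bufresid: length-N internal buffer (all zeros on the first frame, Eq. 70)."""
    N = len(coderesid)
    decoded = gainresid * np.asarray(coderesid)   # decoded residual MDCT coefficients
    X = np.concatenate([bufresid, decoded])       # Equation (72): Xresid' links buffer and new frame
    n = np.arange(N).reshape(-1, 1)
    k = np.arange(2 * N).reshape(1, -1)
    # Equation (71): yresid_n = (2/N) sum_k Xresid'_k cos[(2n+1+N)(2k+1)pi/(4N)]
    yresid = (2.0 / N) * (np.cos(np.pi * (2 * n + 1 + N) * (2 * k + 1) / (4 * N)) @ X)
    new_buf = decoded.copy()                      # Equation (73): buffer update for the next frame
    return yresid, new_buf
```

Carrying `new_buf` into the next call realizes the 50% overlap between consecutive frames that the buffer linkage in Equation (72) provides.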
[0284] The present invention has no restrictions concerning scalable coding layers, and can also be applied to a case in which vector quantization using an auditory masking characteristic value is performed in an upper layer in a hierarchical voice coding and decoding method with three or more layers.
[0285] In vector quantization section 1106, quantization may be performed by applying acoustic weighting filters to distance calculations in above-described Case 1 through Case 5.
[0286] In this embodiment, a CELP type voice coding and decoding method has been described as the voice coding and decoding method of the base layer coding section and decoding section by way of example, but another voice coding and decoding method may also be used.
[0287] Also, in this embodiment, an example has been given in which base layer coded information and enhancement layer coded information are transmitted separately, but a configuration may also be taken whereby coded information of each layer is transmitted multiplexed, and demultiplexing is performed on the receiving side to decode the coded information of each layer.
[0288] Thus, in a scalable coding system, also, applying vector quantization that uses an auditory masking characteristic value of the present invention makes it possible to select a suitable code vector that minimizes degradation of a signal that has a large auditory effect, and obtain a high-quality output signal.
[0289] (Embodiment 3) FIG. 15 is a block diagram showing the configuration of a voice signal transmitting apparatus and voice signal receiving apparatus containing the coding apparatus and decoding apparatus described in above Embodiments 1 and 2 according to Embodiment 3 of the present invention.
More specific applications include mobile phones, car navigation systems, and the like.
[0290] In FIG.15, input apparatus 1502 performs A/D conversion of voice signal 1500 to a digital signal, and outputs this digital signal to voice/musical tone coding apparatus 1503.
Voice/musical tone coding apparatus 1503 is equipped with voice/musical tone coding apparatus 101 shown in FIG.1, codes a digital signal output from input apparatus 1502, and outputs coded information to RF modulation apparatus 1504. RF modulation apparatus 1504 converts voice coded information output from voice/musical tone coding apparatus 1503 to a signal to be sent on a propagation medium such as a radio wave, and outputs the resulting signal to transmitting antenna 1505. Transmitting antenna 1505 sends the output signal output from RF modulation apparatus 1504 as a radio wave (RF signal). RF signal 1506 in the figure represents a radio wave (RF signal) sent from transmitting antenna 1505. This completes a description of the configuration and operation of a voice signal transmitting apparatus.
[ 0291 ] RF signal 1507 is received by receiving antenna 1508, and is output to RF demodulation apparatus 1509.
RF signal 1507 in the figure represents a radio wave received by receiving antenna 1508, and as long as there is no signal attenuation or noise superimposition in the propagation path, is exactly the same as RF signal 1506.
[0292] RF demodulation apparatus 1509 demodulates voice coded information from the RF signal output from receiving antenna 1508, and outputs the result to voice/musical tone decoding apparatus 1510. Voice/musical tone decoding apparatus 1510 is equipped with voice/musical tone decoding apparatus 105 shown in FIG.1, and decodes a voice signal from voice coded information output from RF demodulation apparatus 1509. Output apparatus 1511 performs D/A conversion of the decoded digital voice signal to an analog signal, converts the electrical signal to vibrations of the air, and outputs sound waves audible to the human ear.
[0293] Thus, a high-quality output signal can be obtained in both a voice signal transmitting apparatus and a voice signal receiving apparatus.
[0294] The present application is based on Japanese Patent Application No. 2003-433160 filed on December 26, 2003, the entire content of which is expressly incorporated herein by reference.
Industrial Applicability
[0295] The present invention has advantages of selecting a suitable code vector that minimizes degradation of a signal that has a large auditory effect, and obtaining a high-quality output signal by applying vector quantization that uses an auditory masking characteristic value. Also, the present invention is applicable to the fields of packet communication systems typified by Internet communications, and mobile communication systems such as mobile phone and car navigation systems.

FIG.1
FIG.2 SECTION
FIG.3
FIG.4 CRITICAL BANDWIDTH
FIG.5 START
502 READ codek^e
503 CALCULATE GAIN Gain FOR ALL ELEMENTS
505 CALCULATE GAIN Gain FOR ELEMENTS GREATER THAN OR EQUAL TO AUDITORY MASKING VALUE
509 CASE 1 (NUMBER 25)?
511 CASE 5 (NUMBER 27)?
512 CASE 2 (NUMBER 28)?
514 CASE 3 (NUMBER 33)?
523 CALCULATE gain_indexMIN
524 OUTPUT code_indexMIN and gain_indexMIN
END
FIG.6 AUDITORY MASKING CHARACTERISTIC VALUE Mk; ○: INPUT SIGNAL MDCT COEFFICIENT Xk; ●: CODED VALUE Rk
FIG.7 702 QUADRATURE TRANSFORMATION PROCESSING SECTION
FIG.8
FIG.9
FIG.10 804, 809 BASE LAYER DECODED SIGNAL
FIG.11 SECTION
FIG.12 START
1202 READ coderesidk^e
1203 CALCULATE GAIN Gainresid FOR ALL ELEMENTS
1205 CALCULATE GAIN Gainresid FOR ELEMENTS GREATER THAN OR EQUAL TO AUDITORY MASKING VALUE
1209 CASE 1 (NUMBER 65)?
1211 CASE 5 (NUMBER 67)?
1212 CASE 2 (NUMBER 68)?
1214 CASE 3 (NUMBER 73)?
1223 CALCULATE gainresid_indexMIN
1224 OUTPUT coderesid_indexMIN and gainresid_indexMIN
END
FIG.13 INPUT SIGNAL; AUDITORY MASKING CHARACTERISTIC VALUE Mk; ○: INPUT SIGNAL MDCT COEFFICIENT Xk; ●: CODED VALUE Rk
FIG.14
FIG.15
FIG.16 START; PROCESSING; END
FIG.17 PROCESSING; END
Claims (15)

1. (Deleted)
2. (Deleted)
3. (Deleted)
4. (Deleted)
5. (Deleted)
6. (Deleted)
7. (Deleted)
8. (Deleted)
9. (Deleted)
10. A voice and musical tone coding apparatus comprising:
a quadrature transformation processing section that converts a voice and musical tone signal from a time component to a frequency component;
an auditory masking characteristic value calculation section that finds an auditory masking characteristic value from said voice and musical tone signal; and a vector quantization section that, when one of said voice and musical tone signal frequency component and said code vector is within an auditory masking area indicated by said auditory masking characteristic value, performs vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
11. A voice and musical tone coding apparatus comprising:
a quadrature transformation processing section that converts a voice and musical tone signal from a time component to a frequency component;
an auditory masking characteristic value calculation section that finds an auditory masking characteristic value from said voice and musical tone signal; and a vector quantization section that, when codes of said voice and musical tone signal frequency component and said code vector differ, and codes of said voice and musical tone signal frequency component and said code vector are outside an auditory masking area indicated by said auditory masking characteristic value, performs vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
12. A voice and musical tone coding method comprising:
a quadrature transformation processing step of converting a voice and musical tone signal from a time component to a frequency component;
an auditory masking characteristic value calculation step of finding an auditory masking characteristic value from said voice and musical tone signal; and a vector quantization step of, when one of said voice and musical tone signal frequency component and said code vector is within an auditory masking area indicated by said auditory masking characteristic value, performing vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
13. A voice and musical tone coding method comprising:
a quadrature transformation processing step of converting a voice and musical tone signal from a time component to a frequency component;

an auditory masking characteristic value calculation step of finding an auditory masking characteristic value from said voice and musical tone signal; and a vector quantization step of, when codes of said voice and musical tone signal frequency component and said code vector differ, and codes of said voice and musical tone signal frequency component and said code vector are outside an auditory masking area indicated by said auditory masking characteristic value, performing vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
14. A voice and musical tone coding program that causes a computer to function as:
a quadrature transformation processing section that converts a voice and musical tone signal from a time component to a frequency component;
an auditory masking characteristic value calculation section that finds an auditory masking characteristic value from said voice and musical tone signal; and a vector quantization section that, when one of said voice and musical tone signal frequency component and said code vector is within an auditory masking area indicated by said auditory masking characteristic value, performs vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
15. A voice and musical tone coding program that causes a computer to function as:
a quadrature transformation processing section that converts a voice and musical tone signal from a time component to a frequency component;
an auditory masking characteristic value calculation section that finds an auditory masking characteristic value from said voice and musical tone signal;
and a vector quantization section that, when codes of said voice and musical tone signal frequency component and said code vector differ, and codes of said voice and musical tone signal frequency component and said code vector are outside an auditory masking area indicated by said auditory masking characteristic value, performs vector quantization changing a calculation method of a distance between said voice and musical tone signal frequency component and said code vector based on said auditory masking characteristic value.
CA002551281A 2003-12-26 2004-12-20 Voice/musical sound encoding device and voice/musical sound encoding method Abandoned CA2551281A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2003433160 2003-12-26
JP2003-433160 2003-12-26
PCT/JP2004/019014 WO2005064594A1 (en) 2003-12-26 2004-12-20 Voice/musical sound encoding device and voice/musical sound encoding method

Publications (1)

Publication Number Publication Date
CA2551281A1 true CA2551281A1 (en) 2005-07-14

Family

ID=34736506

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002551281A Abandoned CA2551281A1 (en) 2003-12-26 2004-12-20 Voice/musical sound encoding device and voice/musical sound encoding method

Country Status (7)

Country Link
US (1) US7693707B2 (en)
EP (1) EP1688917A1 (en)
JP (1) JP4603485B2 (en)
KR (1) KR20060131793A (en)
CN (1) CN1898724A (en)
CA (1) CA2551281A1 (en)
WO (1) WO2005064594A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4603485B2 (en) * 2003-12-26 2010-12-22 パナソニック株式会社 Speech / musical sound encoding apparatus and speech / musical sound encoding method
EP1858006B1 (en) * 2005-03-25 2017-01-25 Panasonic Intellectual Property Corporation of America Sound encoding device and sound encoding method
WO2006120931A1 (en) * 2005-05-11 2006-11-16 Matsushita Electric Industrial Co., Ltd. Encoder, decoder, and their methods
CN1889172A (en) * 2005-06-28 2007-01-03 松下电器产业株式会社 Sound sorting system and method capable of increasing and correcting sound class
KR20070046752A (en) * 2005-10-31 2007-05-03 엘지전자 주식회사 Method and apparatus for signal processing
JP4871894B2 (en) * 2007-03-02 2012-02-08 パナソニック株式会社 Encoding device, decoding device, encoding method, and decoding method
WO2008108077A1 (en) * 2007-03-02 2008-09-12 Panasonic Corporation Encoding device and encoding method
CN101350197B (en) * 2007-07-16 2011-05-11 华为技术有限公司 Method for encoding and decoding stereo audio and encoder/decoder
US8527265B2 (en) * 2007-10-22 2013-09-03 Qualcomm Incorporated Low-complexity encoding/decoding of quantized MDCT spectrum in scalable speech and audio codecs
US8515767B2 (en) * 2007-11-04 2013-08-20 Qualcomm Incorporated Technique for encoding/decoding of codebook indices for quantized MDCT spectrum in scalable speech and audio codecs
WO2009110738A2 (en) * 2008-03-03 2009-09-11 엘지전자(주) Method and apparatus for processing audio signal
WO2009110751A2 (en) * 2008-03-04 2009-09-11 Lg Electronics Inc. Method and apparatus for processing an audio signal
CA2759914A1 (en) * 2009-05-29 2010-12-02 Nippon Telegraph And Telephone Corporation Encoding device, decoding device, encoding method, decoding method and program therefor
RU2464649C1 (en) * 2011-06-01 2012-10-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Audio signal processing method
JP6160072B2 (en) * 2012-12-06 2017-07-12 富士通株式会社 Audio signal encoding apparatus and method, audio signal transmission system and method, and audio signal decoding apparatus
CN109215670B (en) * 2018-09-21 2021-01-29 西安蜂语信息科技有限公司 Audio data transmission method and device, computer equipment and storage medium

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US80091A (en) * 1868-07-21 keplogley of martinsbukg
US173677A (en) * 1876-02-15 Improvement in fabrics
US44727A (en) * 1864-10-18 Improvement in sleds
US5502789A (en) * 1990-03-07 1996-03-26 Sony Corporation Apparatus for encoding digital data with reduction of perceptible noise
US5323486A (en) * 1990-09-14 1994-06-21 Fujitsu Limited Speech coding system having codebook storing differential vectors between each two adjoining code vectors
KR950010340B1 (en) * 1993-08-25 1995-09-14 대우전자주식회사 Audio signal distortion calculating system using time masking effect
JPH07160297A (en) 1993-12-10 1995-06-23 Nec Corp Voice parameter encoding system
KR970005131B1 (en) * 1994-01-18 1997-04-12 대우전자 주식회사 Digital audio encoding apparatus adaptive to the human audatory characteristic
JPH08123490A (en) 1994-10-24 1996-05-17 Matsushita Electric Ind Co Ltd Spectrum envelope quantizing device
US5864797A (en) * 1995-05-30 1999-01-26 Sanyo Electric Co., Ltd. Pitch-synchronous speech coding by applying multiple analysis to select and align a plurality of types of code vectors
TW321810B (en) * 1995-10-26 1997-12-01 Sony Co Ltd
JP3351746B2 (en) 1997-10-03 2002-12-03 松下電器産業株式会社 Audio signal compression method, audio signal compression device, audio signal compression method, audio signal compression device, speech recognition method, and speech recognition device
DE69836785T2 (en) 1997-10-03 2007-04-26 Matsushita Electric Industrial Co., Ltd., Kadoma Audio signal compression, speech signal compression and speech recognition
CA2684452C (en) 1997-10-22 2014-01-14 Panasonic Corporation Multi-stage vector quantization for speech encoding
KR100304092B1 (en) 1998-03-11 2001-09-26 마츠시타 덴끼 산교 가부시키가이샤 Audio signal coding apparatus, audio signal decoding apparatus, and audio signal coding and decoding apparatus
JP4327420B2 (en) 1998-03-11 2009-09-09 パナソニック株式会社 Audio signal encoding method and audio signal decoding method
JP3515903B2 (en) * 1998-06-16 2004-04-05 松下電器産業株式会社 Dynamic bit allocation method and apparatus for audio coding
US6353808B1 (en) * 1998-10-22 2002-03-05 Sony Corporation Apparatus and method for encoding a signal as well as apparatus and method for decoding a signal
CA2348659C (en) 1999-08-23 2008-08-05 Kazutoshi Yasunaga Apparatus and method for speech coding
JP4438144B2 (en) * 1999-11-11 2010-03-24 ソニー株式会社 Signal classification method and apparatus, descriptor generation method and apparatus, signal search method and apparatus
JP2002268693A (en) * 2001-03-12 2002-09-20 Mitsubishi Electric Corp Audio encoding device
JP2002323199A (en) 2001-04-24 2002-11-08 Matsushita Electric Ind Co Ltd Vaporization device for liquefied petroleum gas
US7027982B2 (en) * 2001-12-14 2006-04-11 Microsoft Corporation Quality and rate control strategy for digital audio
JP2003323199A (en) 2002-04-26 2003-11-14 Matsushita Electric Ind Co Ltd Device and method for encoding, device and method for decoding
US7752052B2 (en) 2002-04-26 2010-07-06 Panasonic Corporation Scalable coder and decoder performing amplitude flattening for error spectrum estimation
CN101615396B (en) 2003-04-30 2012-05-09 松下电器产业株式会社 Voice encoding device and voice decoding device
JP4603485B2 (en) * 2003-12-26 2010-12-22 パナソニック株式会社 Speech / musical sound encoding apparatus and speech / musical sound encoding method

Also Published As

Publication number Publication date
JP4603485B2 (en) 2010-12-22
CN1898724A (en) 2007-01-17
KR20060131793A (en) 2006-12-20
EP1688917A1 (en) 2006-08-09
US7693707B2 (en) 2010-04-06
US20070179780A1 (en) 2007-08-02
WO2005064594A1 (en) 2005-07-14
JPWO2005064594A1 (en) 2007-07-19

Similar Documents

Publication Publication Date Title
US8463602B2 (en) Encoding device, decoding device, and method thereof
US7769584B2 (en) Encoder, decoder, encoding method, and decoding method
US8099275B2 (en) Sound encoder and sound encoding method for generating a second layer decoded signal based on a degree of variation in a first layer decoded signal
US8422569B2 (en) Encoding device, decoding device, and method thereof
EP1203370B1 (en) Method for improving the coding efficiency of an audio signal
US9037454B2 (en) Efficient coding of overcomplete representations of audio using the modulated complex lapped transform (MCLT)
CA2551281A1 (en) Voice/musical sound encoding device and voice/musical sound encoding method
US7844451B2 (en) Spectrum coding/decoding apparatus and method for reducing distortion of two band spectrums
JP2003323199A (en) Device and method for encoding, device and method for decoding
US20090210219A1 (en) Apparatus and method for coding and decoding residual signal
CN101689372A (en) Signal analysis device, signal control device, its system, method, and program
KR100765747B1 (en) Apparatus for scalable speech and audio coding using Tree Structured Vector Quantizer
JP2005114814A (en) Method, device, and program for speech encoding and decoding, and recording medium where same is recorded

Legal Events

Date Code Title Description
FZDE Discontinued
FZDE Discontinued

Effective date: 20101220