CA2256830C - Signal conversion apparatus and method - Google Patents

Signal conversion apparatus and method

Info

Publication number
CA2256830C
CA2256830C
Authority
CA
Canada
Prior art keywords
signals
subject pixel
signal
pixel
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002256830A
Other languages
French (fr)
Other versions
CA2256830A1 (en)
Inventor
Tetsujiro Kondo
Naoki Kobayashi
Hideo Nakaya
Takaya Hoshino
Takeharu Nishikata
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CA2256830A1 publication Critical patent/CA2256830A1/en
Application granted granted Critical
Publication of CA2256830C publication Critical patent/CA2256830C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current
Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/77Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • H04N9/78Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase for separating the brightness signal or the chrominance signal from the colour television signal, e.g. using comb filter

Abstract

A simplified Y/C separation circuit in which a plurality of luminance signals are calculated for the subject pixel based on an NTSC signal of the subject pixel and NTSC signals of pixels that are close to the subject pixel spatially or temporally. Correlations between the plurality of luminance signals are obtained in a difference circuit and a comparison circuit. In a classification circuit, classification is performed, that is, the subject pixel is classified as belonging to a certain class, based on the correlations between the plurality of luminance signals. Prediction coefficients corresponding to the class of the subject pixel are read out from a prediction coefficients memory section. The RGB signals of the subject pixel are then determined by calculating prescribed linear first-order formulae.

Description

SIGNAL CONVERSION APPARATUS AND METHOD
BACKGROUND OF THE INVENTION
The present invention relates generally to a signal conversion apparatus and a signal conversion method. More particularly, the present invention relates to a signal conversion apparatus and a signal conversion method for converting a composite video signal into component video signals.
As is well known in the art, an NTSC (National Television System Committee) television signal is produced by multiplexing a luminance signal (Y) and a chrominance signal (C, having I and Q components) by quadrature modulation. Therefore, to receive a television signal and display a picture, it is necessary to separate a luminance signal and a chrominance signal from the television signal (Y/C separation) and then to convert those signals into component signals such as RGB signals by matrix conversion.
However, in a conventional apparatus performing Y/C separation, for example, a luminance signal and a chrominance signal of a particular subject pixel are determined by performing an operation that uses composite signals of the subject pixel and pixels in the vicinity of the subject pixel, together with predetermined fixed coefficients.
However, if the coefficients are not suitable for the subject pixel, dot interference, cross-color, or the like may occur, and picture quality will deteriorate.
It would therefore be beneficial to provide an apparatus and method that make it possible to produce pictures in which deterioration in picture quality due to dot interference, cross-color, or the like is reduced.
OBJECTS OF THE INVENTION
Therefore, it is an object of the invention to provide an improved signal conversion apparatus and method.
Another object of the invention is to provide an improved signal conversion apparatus and method for converting a composite video signal into component video signals.
A further object of the invention is to provide an improved signal conversion apparatus and method utilizing a classification adaptive processing system for a subject pixel to determine the various coefficients to be used for converting the subject pixel of a composite signal into component signals.
Yet another object of the invention is to provide an improved signal conversion apparatus and method which through the use of a classification adaptive processing system for a pixel to be converted reduces dot interference, cross-color or the like between various pixels.
A still further object of the invention is to provide an improved signal conversion apparatus and method which utilizes a classification adaptive processing system in order to reduce deterioration of picture quality during conversion from a composite video signal into component video signals, and during subsequent display.
Still other objects and advantages of the invention will in part be obvious and will in part be apparent from the specification and drawings.
SUMMARY OF THE INVENTION
Generally speaking, in accordance with the invention, a signal conversion apparatus and a signal conversion method are provided in which a plurality of luminance signals of a subject pixel are calculated based on a composite signal of the subject pixel and composite signals of pixels that are close to the subject pixel spatially or temporally, and correlations therebetween are determined. Then, classification is performed for classifying the subject pixel in one of a plurality of prescribed classes based on the correlations between the plurality of luminance signals. Component signals of the subject pixel are determined by performing operations by using coefficients corresponding to the class of the subject pixel. Therefore, it becomes possible to obtain a high-quality picture of component signals.
Furthermore, in a learning apparatus and a learning method according to the invention, component signals for learning are converted into a composite signal for learning, and a plurality of luminance signals of a subject pixel are calculated based on a composite signal of the subject pixel and composite signals of pixels that are close to the subject pixel spatially or temporally. Then, correlations between the plurality of luminance signals are determined and classification is performed by determining the class of the subject pixel based on the correlations. Operations are then performed for determining the coefficients that decrease errors with respect to the component signals for learning for each of the classes of component signals that are obtained by performing operations by using the composite signal for learning and the coefficients. Therefore, it becomes possible to obtain coefficients for obtaining a high-quality picture of component signals.
The invention accordingly comprises the several steps and the relationship of one or more of such steps with respect to each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts which are adapted to effect such steps, all as exemplified in the following detailed disclosure, and the scope of the invention will be indicated in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:
Fig. 1 is a block diagram showing an example configuration of a television receiver constructed in accordance with the invention;
Fig. 2 is a block diagram showing an example configuration of a classification adaptive processing circuit of Fig. 1;
Fig. 3A, Fig. 3B and Fig. 3C depict a process performed by a simplified Y/C separation circuit of Fig. 2;
Fig. 4 depicts a table for performing a process by a classification circuit of Fig. 2;
Fig. 5 depicts an example structure of a field of a digital NTSC signal;
Fig. 6A and Fig. 6B depict a process executed by a prediction taps forming circuit of Fig. 2;
Fig. 7 depicts a flowchart of a process executed by the classification adaptive processing circuit of Fig. 2;
Fig. 8 is a block diagram showing a learning apparatus constructed in accordance with the invention; and
Fig. 9 depicts a flowchart of a learning process executed by the learning apparatus of Fig. 8.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring first to Fig. 1, an example configuration of an embodiment of a television receiver to which the invention is applied is shown. A tuner 1 detects and demodulates an NTSC television signal that has been received by an antenna (not shown), and supplies a composite video picture signal (hereinafter referred to as an NTSC signal where appropriate) to an A/D converter 2 and an audio signal to an amplifier 5. A/D converter 2 samples, with predetermined timing, the NTSC signal that is supplied from tuner 1, and thereby sequentially outputs a standard Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal. The digital NTSC signal (Y-I signal, Y-Q signal, Y+I signal, and Y+Q signal) that is output from A/D converter 2 is supplied to a classification adaptive processing circuit 3.
If the phase of the Y-I signal is, for instance, 0°, the phases of the Y-Q signal, Y+I signal, and Y+Q signal are 90°, 180°, and 270°, respectively.
Classification adaptive processing circuit 3 calculates a plurality of luminance signals for the subject pixel based on a digital NTSC signal of the subject pixel and digital NTSC
signals of pixels that are adjacent to the subject pixel spatially and/or temporally among the received digital NTSC signals, and determines correlations between the plurality of luminance signals. Further, classification adaptive processing circuit 3 classifies the subject pixel by determining to which of a predetermined plurality of classes the subject pixel belongs, based on the correlations between the plurality of luminance signals.
Classification adaptive processing circuit 3 then performs a calculation by using prediction coefficients (described below) corresponding to the determined class of the subject pixel, to thereby determine component signals, for instance, RGB signals, of the subject pixel.
The RGB
signals that have been determined by classification adaptive processing circuit 3 are supplied to a CRT (cathode-ray tube) 4. CRT 4 displays a picture corresponding to the RGB signals supplied from classification adaptive processing circuit 3. Amplifier 5 amplifies an audio signal that is supplied from tuner 1 and supplies an amplified audio signal to a speaker 6.
Speaker 6 outputs the audio signal supplied from amplifier 5.
In a television receiver having the above configuration, when a user selects a particular channel by manipulating a remote commander, or by other means (not shown), tuner 1 detects and demodulates a television signal corresponding to the selected channel, and supplies an NTSC signal (i.e., a picture signal of the demodulated television signal) to A/D
converter 2 and an audio signal thereof to amplifier 5.
A/D converter 2 converts the analog NTSC signal that is supplied from tuner 1 to a digital signal and supplies resulting signals to classification adaptive processing circuit 3.
Classification adaptive processing circuit 3 converts, in the above-described manner, the digital NTSC signal that is supplied from A/D converter 2 into RGB signals.
These RGB
signals are then supplied to and displayed on CRT 4. Amplifier 5 amplifies the audio signal supplied from tuner 1. An amplified audio signal is supplied to and output from speaker 6.
Fig. 2 shows a preferred example configuration of the classification adaptive processing circuit 3 shown in Fig. 1. In Fig. 2, a digital NTSC signal that is input to classification adaptive processing circuit 3 from the A/D converter 2 is supplied to a field memory 11. Field memory 11, which can store digital NTSC signals of at least 3 fields, for example, stores the received NTSC signal under the control of a control circuit 17. Field memory 11 then reads out stored digital NTSC signals and supplies them to a simplified Y/C separation circuit 12 and a prediction taps forming circuit 18. Simplified Y/C separation circuit 12 calculates a plurality of luminance signals for a particular prescribed subject pixel based on a digital NTSC signal of the particular subject pixel and digital NTSC signals of pixels that are adjacent to the subject pixel spatially and/or temporally among the digital NTSC signals stored in field memory 11.
For example, as shown in Fig. 3A, P1 denotes the subject pixel of the subject field and P2A and P3A denote pixels located adjacent above and below the subject pixel P1. Simplified Y/C separation circuit 12 determines, as luminance of the subject pixel P1, a luminance signal Y1 that is expressed by a formula Y1 = 0.5P1 + 0.25P2A + 0.25P3A. As a further example, as shown in Fig. 3B, P1 denotes the subject pixel of the subject field and P2B and P3B denote pixels located on the left of and on the right of the subject pixel P1 and adjacent to the respective pixels that are directly adjacent to the subject pixel P1. Simplified Y/C separation circuit 12 determines, as luminance of the subject pixel P1, a luminance signal Y2 that is expressed by a formula Y2 = 0.5P1 + 0.25P2B + 0.25P3B. Finally, as shown in Fig. 3C, P1 denotes the subject pixel of the subject field and P2C denotes a pixel located at the same position as the subject pixel P1 in a field that is two fields (one frame) preceding the subject field. Simplified Y/C separation circuit 12 determines, as luminance of the subject pixel P1, a luminance signal Y3 that is expressed by a formula Y3 = 0.5P1 + 0.5P2C. Thus, simplified Y/C separation circuit 12 determines the above three luminance signals Y1 through Y3 as luminance signals of the subject pixel and outputs these luminance values to a difference circuit 13.
Difference circuit 13 and a comparison circuit 14 determine correlations between the three luminance signals Y1 through Y3 that are supplied from simplified Y/C
separation circuit 12. That is, for example, difference circuit 13 determines difference absolute values D1 through D3 that are expressed by the following formulae and supplies these values D1 through D3 to comparison circuit 14.
D1 = |Y1 - Y2|
D2 = |Y2 - Y3|
D3 = |Y3 - Y1|
Comparison circuit 14 compares the difference absolute values D1 through D3 that are supplied from difference circuit 13 with a predetermined threshold value, and supplies a classification circuit 15 with flags F1 through F3 representing the results of the respective comparisons between the three luminance signals Y1 through Y3. Comparison circuit 14 outputs a plurality of flags F1 through F3, each flag having a value of 1 or 0. The value of each of the flags F1 through F3 is 1 when the value of the corresponding difference absolute value D1 through D3 is greater than the predetermined threshold value. The value of each of the flags F1 through F3 is 0 when the value of the corresponding difference absolute value D1 through D3 is smaller than or equal to the predetermined threshold value.
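The difference and comparison steps can be sketched together (an illustrative sketch; the function name and the single shared threshold are assumptions based on the text above):

```python
def correlation_flags(y1, y2, y3, threshold):
    """Derive flags F1-F3 from luminance estimates Y1-Y3: a flag is 1
    when the corresponding pairwise difference exceeds the threshold
    (weak correlation), and 0 otherwise (strong correlation)."""
    d1 = abs(y1 - y2)
    d2 = abs(y2 - y3)
    d3 = abs(y3 - y1)
    return tuple(1 if d > threshold else 0 for d in (d1, d2, d3))
```

For example, if Y3 differs sharply from both Y1 and Y2 (as with motion), F2 and F3 are set while F1 stays 0.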
For example, in a preferred embodiment, flag F1 becomes 1 when Y1 and Y2 have a large difference between them and thus a weak correlation. This indicates that the three vertically arranged pixels, including the subject pixel, that were used in determining Y1 (see Fig. 3A) or the three horizontally arranged pixels, including the subject pixel, that were used in determining Y2 (see Fig. 3B) include a signal that causes deterioration of the Y/C
separation. Specifically, for example, flag F1 becomes 1 when a luminance edge exists in a direction that intersects the vertical or horizontal direction. On the other hand, flag F1 becomes 0 when Y1 and Y2 have a small difference between them and thus a strong correlation. This indicates that the three vertically arranged pixels, including the subject pixel, that were used in determining Y1 (see Fig. 3A) and the three horizontally arranged pixels, including the subject pixel, that were used in determining Y2 (see Fig. 3B) do not include a signal that causes deterioration of the Y/C separation.
Flag F2 becomes 1 when Y2 and Y3 have a large difference between them and thus a weak correlation. This indicates that the three horizontally arranged pixels, including the subject pixel, that were used in determining Y2 (see Fig. 3B) or the two temporally arranged pixels that were used in determining Y3 (see Fig. 3C) include a signal that causes deterioration of the Y/C separation. Specifically, for example, flag F2 becomes 1 when a luminance edge exists in a direction that intersects the vertical direction or the subject pixel has a movement. On the other hand, flag F2 becomes 0 when Y2 and Y3 have a small difference between them and thus a strong correlation. This indicates that the three horizontally arranged pixels, including the subject pixel, that were used in determining Y2 (see Fig. 3B) and the two temporally arranged pixels that were used in determining Y3 (see Fig. 3C) do not include a signal that causes deterioration of the Y/C
separation.
A description for flag F3 is omitted because the above description for flag F2 applies to flag F3 if Y2 is replaced by Y1 and the horizontal direction by the vertical direction.
A classification circuit 15 performs classification by classifying the subject pixel as being part of a prescribed class based on flags F1-F3 that are supplied from comparison circuit 14. Classification circuit 15 supplies, as an address, the class determined for the subject pixel to a prediction coefficients memory section 16. That is, classification circuit 15 employs, for instance in a preferred embodiment, one of eight values 0 to 7 as shown in Fig. 4 in accordance with flags F1-F3 that are supplied from comparison circuit 14. This value is then supplied to prediction coefficients memory section 16 as an address.
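Since three binary flags distinguish exactly eight cases, the classification can be sketched as reading the flags as a 3-bit number. Note this particular bit ordering is an assumption: the exact table of Fig. 4 is not reproduced in the text, and any fixed one-to-one assignment of flag patterns to the values 0-7 would serve.

```python
def classify(f1, f2, f3):
    """Map flags F1-F3 (each 0 or 1) to one of eight classes, 0 to 7,
    by treating (F1, F2, F3) as a 3-bit binary number (assumed ordering)."""
    return (f1 << 2) | (f2 << 1) | f3
```

The resulting value is used directly as an address into the prediction coefficients memory.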
Prediction coefficients memory section 16 comprises a Y-I memory 16A, a Y-Q memory 16B, a Y+I memory 16C, and a Y+Q memory 16D. Each of these memories is supplied with the class of the subject pixel as an address that is output from classification circuit 15 as well as with a CS (chip select) signal that is output from a control circuit 17.
The Y-I memory 16A, Y-Q memory 16B, Y+I memory 16C, and Y+Q memory 16D store, for the respective phases of an NTSC signal, prediction coefficients for the respective classes to be used for converting an NTSC signal of the subject pixel into RGB
signals.
Fig. 5 shows pixels that constitute a particular field of an NTSC signal. In Fig. 5, four distinct marks indicate Y-I signals (signals having a phase of 0°), Y-Q signals (phase 90°), Y+I signals (phase 180°), and Y+Q signals (phase 270°), respectively.
As shown in Fig. 5, Y-I signals, Y-Q signals, Y+I signals, and Y+Q signals are arranged repeatedly. Y-I signals and Y+I signals are arranged alternately in one column and Y-Q and Y+Q signals are arranged alternately in an adjacent column.
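One indexing consistent with this arrangement (the pattern cycling along a line with period four, and each column's sign alternating between successive lines) can be sketched as follows; the `(col + 2*row) % 4` formula is an assumption inferred from the description, not something stated in the patent:

```python
def ntsc_phase_label(row, col):
    """Return which of the four sample phases the pixel at (row, col)
    carries, under an assumed sampling order that matches the layout
    described for Fig. 5."""
    labels = ("Y-I", "Y-Q", "Y+I", "Y+Q")  # phases 0°, 90°, 180°, 270°
    return labels[(col + 2 * row) % 4]
```

Under this indexing, column 0 alternates Y-I/Y+I down the field and column 1 alternates Y-Q/Y+Q, as the text describes; the label would select among memories 16A-16D.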
Returning to Fig. 2, Y-I memory 16A, Y-Q memory 16B, Y+I memory 16C, and Y+Q memory 16D (hereinafter collectively referred to as memories 16A-16D where appropriate) store prediction coefficients for the respective classes to be used for converting a Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal into RGB signals.
Prediction coefficients corresponding to the class of the subject pixel that is supplied from classification circuit 15 are read out from the selected memory 16A-16D in accordance with a CS signal from control circuit 17 and supplied to an operation circuit 19. Each of the memories 16A-16D stores, as prediction coefficients for the respective classes, prediction coefficients for R, G, and B to be used for converting an NTSC signal into R, G and B signals.
Control circuit 17 controls read and write operations by field memory 11. That is, control circuit 17 selects the subject field from among a plurality of fields stored in the field memory 11. When processing for a particular subject field has been completed, control circuit 17 instructs the next field to be read from field memory 11 as a new subject field.
Further, control circuit 17 also causes field memory 11 to store a newly supplied field in place of the field that has been provided as the subject field in a first-in, first-out arrangement. Further, control circuit 17 instructs field memory 11 to provide pixels of the subject field sequentially in line scanning order to simplified Y/C separation circuit 12, and also to provide pixels that are necessary for processing the subject pixel from field memory 11 to simplified Y/C separation circuit 12 and to prediction taps forming circuit 18. Control circuit 17 outputs the CS signal for selecting one of the memories 16A-16D
corresponding to the phase of the subject pixel. That is, control circuit 17 supplies the prediction coefficients memory section 16 with CS signals for selecting the Y-I memory 16A, Y-Q memory 16B, Y+I memory 16C, and Y+Q memory 16D when the NTSC signal of the subject pixel is a Y-I
signal, a Y-Q signal, a Y+I signal, and a Y+Q signal, respectively.
Prediction taps forming circuit 18 is supplied with pixels that have been read out from field memory 11. Based on these supplied pixels, prediction taps forming circuit 18 forms prediction taps to be used for converting an NTSC signal of the subject pixel into RGB
signals, and supplies the prediction taps to operation circuit 19.
Specifically, for example, when pixel "a" in the subject field shown in Fig. 6A is considered the subject pixel, prediction taps forming circuit 18 employs, as prediction taps, pixels "b" through "e" in the subject field located above, below, on the left of, and on the right of the subject pixel "a" and adjacent thereto, pixels "f" through "i" located at top-left, top-right, bottom-left, and bottom-right positions of the subject pixel "a" and adjacent thereto, pixel "j" located to the left of the subject pixel and adjacent to the pixel "d" that is directly adjacent to the subject pixel "a", pixel "k" located to the right of the subject pixel and adjacent to the pixel "e" that is directly adjacent to the subject pixel "a", and pixels "a′" through "k′" located at the same positions as pixels "a" through "k" in a field that is two fields preceding the subject field (see Fig. 6B). These prediction taps are forwarded to operation circuit 19.
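The tap geometry just described can be sketched as a table of relative offsets (an illustrative sketch; the names and the row/column convention are assumptions):

```python
# Relative (row, col) offsets of taps "a" through "k" around the subject
# pixel, per Fig. 6A; the same offsets are taken again in the field two
# fields earlier to obtain taps a' through k' (Fig. 6B).
TAP_OFFSETS = [
    (0, 0),                              # a: subject pixel
    (-1, 0), (1, 0), (0, -1), (0, 1),    # b-e: above, below, left, right
    (-1, -1), (-1, 1), (1, -1), (1, 1),  # f-i: the four diagonals
    (0, -2), (0, 2),                     # j, k: two columns left and right
]

def form_prediction_taps(field, prev_frame_field, row, col):
    """Gather the 22 prediction-tap values for the subject pixel: 11 from
    the subject field and 11 from the field one frame (two fields) earlier."""
    current = [field[row + dr][col + dc] for dr, dc in TAP_OFFSETS]
    previous = [prev_frame_field[row + dr][col + dc] for dr, dc in TAP_OFFSETS]
    return current + previous
```

Boundary handling (taps falling outside the field) is not specified in the text and is omitted here.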
Operation circuit 19 calculates RGB signals of the subject pixel by using prediction coefficients that are supplied from prediction coefficients memory section 16 and prediction taps that are supplied from prediction taps forming circuit 18. As described above, operation circuit 19 is supplied with sets of prediction coefficients to be used for converting an NTSC signal of the subject pixel into R, G, and B signals (from prediction coefficients memory section 16) as well as with prediction taps formed for the subject pixel (from prediction taps forming circuit 18; see Fig. 6). Assuming that the pixels constituting the prediction taps are pixels "a" through "k" and "a′" through "k′" as described above in connection with Fig. 6, that the prediction coefficients for R are wRa through wRk and wRa′ through wRk′, that those for G are wGa through wGk and wGa′ through wGk′, and that those for B are wBa through wBk and wBa′ through wBk′, operation circuit 19 calculates R, G, and B signals of the subject pixel according to the following linear first-order equations:

R = wRa·a + wRb·b + wRc·c + wRd·d + wRe·e + wRf·f + wRg·g + wRh·h + wRi·i + wRj·j + wRk·k + wRa′·a′ + wRb′·b′ + wRc′·c′ + wRd′·d′ + wRe′·e′ + wRf′·f′ + wRg′·g′ + wRh′·h′ + wRi′·i′ + wRj′·j′ + wRk′·k′ + WRoffset
G = wGa·a + wGb·b + wGc·c + wGd·d + wGe·e + wGf·f + wGg·g + wGh·h + wGi·i + wGj·j + wGk·k + wGa′·a′ + wGb′·b′ + wGc′·c′ + wGd′·d′ + wGe′·e′ + wGf′·f′ + wGg′·g′ + wGh′·h′ + wGi′·i′ + wGj′·j′ + wGk′·k′ + WGoffset
B = wBa·a + wBb·b + wBc·c + wBd·d + wBe·e + wBf·f + wBg·g + wBh·h + wBi·i + wBj·j + wBk·k + wBa′·a′ + wBb′·b′ + wBc′·c′ + wBd′·d′ + wBe′·e′ + wBf′·f′ + wBg′·g′ + wBh′·h′ + wBi′·i′ + wBj′·j′ + wBk′·k′ + WBoffset
......(1)

WRoffset, WGoffset, and WBoffset are constant terms for correcting a bias difference between an NTSC signal and RGB signals, and are included in the respective sets of prediction coefficients for R, G, and B.
As described above, in operation circuit 19, the process that uses coefficients (prediction coefficients) corresponding to the class of the subject pixel, that is, the process that adaptively uses prediction coefficients corresponding to the property (characteristic) of the subject pixel, is called an adaptive process. The adaptive process will now be briefly described. By way of example, prediction value E[y] of a component signal y of the subject pixel may be determined by using a linear first-order combination model that is prescribed by linear combinations of composite signals (hereinafter referred to as learning data where appropriate) x1, x2, ... of pixels (including the subject pixel) that are adjacent to the subject pixel spatially and/or temporally and predetermined prediction coefficients w1, w2, .... This prediction value E[y] can be expressed by the following equation.
E[y] = w1·x1 + w2·x2 + ...  ......(2)

For generalization, a matrix W that is a set of prediction coefficients w, a matrix X that is a set of learning data, and a matrix Y' that is a set of prediction values E[y] are defined as follows:

    X = | x11 x12 ... x1n |      W = | w1 |      Y' = | E[y1] |
        | x21 x22 ... x2n |          | w2 |           | E[y2] |
        |       ...       |          | ...|           |  ...  |
        | xm1 xm2 ... xmn |          | wn |           | E[ym] |
                                                          ......(3)

The following observation equation holds:
XW = Y'  ......(4)

Prediction values E[y] that are similar to component signals y of subject pixels are determined by applying a least squares method to this observation equation. In this case, a matrix Y that is a set of true component signals y of subject pixels as teacher data and a matrix E that is a set of residuals e of prediction values E[y] with respect to the component signals y are defined as follows:

    Y = | y1 |      E = | e1 |
        | y2 |          | e2 |
        | ...|          | ...|
        | ym |          | em |
                            ......(5)

From equations (4) and (5), the following residual equation holds:
XW = Y + E  ......(6)

In this case, prediction coefficients wi for determining prediction values E[y] that are similar to the component signals y are determined by minimizing the following squared error:

    Σ(i=1 to m) ei²  ......(7)

Therefore, prediction coefficients wi that satisfy the following equations (i.e., that make the derivatives of the above squared error with respect to the prediction coefficients wi equal to 0) are optimum values for determining prediction values E[y] similar to the component signals y:

    e1·∂e1/∂wi + e2·∂e2/∂wi + ... + em·∂em/∂wi = 0  (i = 1, 2, ..., n)  ......(8)

In view of the above, first, the following equations are obtained by differentiating the residual equation (6) with respect to the prediction coefficients wi:

    ∂ei/∂w1 = xi1, ∂ei/∂w2 = xi2, ..., ∂ei/∂wn = xin  (i = 1, 2, ..., m)  ......(9)

Equation (10) is obtained from equations (8) and (9):

    Σ(i=1 to m) ei·xi1 = 0, Σ(i=1 to m) ei·xi2 = 0, ..., Σ(i=1 to m) ei·xin = 0  ......(10)

By considering the relationship between the learning data x, the prediction coefficients w, the teacher data y, and the residuals e in the residual equation (6), the following normal equations can be obtained from equation (10):
    (Σ xi1·xi1)w1 + (Σ xi1·xi2)w2 + ... + (Σ xi1·xin)wn = (Σ xi1·yi)
    (Σ xi2·xi1)w1 + (Σ xi2·xi2)w2 + ... + (Σ xi2·xin)wn = (Σ xi2·yi)
    ...
    (Σ xin·xi1)w1 + (Σ xin·xi2)w2 + ... + (Σ xin·xin)wn = (Σ xin·yi)
                              (each Σ taken over i = 1 to m)  ......(11)

The normal equations (11) can be obtained in the same number as the number of prediction coefficients w to be determined. Therefore, optimum prediction coefficients w can be determined by solving equations (11) (for equations (11) to be soluble, the matrix of coefficients multiplying the prediction coefficients W needs to be regular). To solve equations (11), it is possible to use a sweep-out method (Gauss-Jordan elimination method) or the like.
The adaptive process is a process for determining optimum prediction coefficients w in the above manner and then determining prediction values E[y] that are close to the component signals y according to equation (2) by using the optimum prediction coefficients w (the adaptive process includes a case of determining prediction coefficients w in advance and determining prediction values by using those prediction coefficients w). The prediction coefficients memory section 16 shown in Fig. 2 stores, for the respective phases of an NTSC signal, prediction coefficients of respective classes for R, G, and B that are determined by establishing the normal equations (11) through a learning process described below, and by then solving those normal equations. In this embodiment, as described above, the prediction coefficients include the constant terms WRoffset, WGoffset, and WBoffset. These constant terms can be determined by extending the above technique and solving the normal equations (11).
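Building the normal equations (11) from learning data and solving them by the sweep-out (Gauss-Jordan) method mentioned above can be sketched as follows (an illustrative sketch; the function name and list-based matrix representation are assumptions):

```python
def solve_normal_equations(xs, ys):
    """Build the normal equations (11) from m rows of learning data xs
    (each of length n) and m teacher values ys, then solve them by
    Gauss-Jordan elimination; returns the n prediction coefficients w.
    The coefficient matrix must be regular (nonsingular)."""
    n = len(xs[0])
    # Accumulate the sums of equations (11): A = X^T X and b = X^T y.
    a = [[sum(row[i] * row[j] for row in xs) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * y for row, y in zip(xs, ys)) for i in range(n)]
    # Sweep-out (Gauss-Jordan) with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        scale = a[col][col]
        a[col] = [v / scale for v in a[col]]
        b[col] /= scale
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                factor = a[r][col]
                a[r] = [rv - factor * cv for rv, cv in zip(a[r], a[col])]
                b[r] -= factor * b[col]
    return b
```

Appending a constant 1 to every learning-data row makes the solver learn the offset terms WRoffset, WGoffset, and WBoffset as one extra coefficient, matching the extension described above.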
Next, the process executed by the classification adaptive processing circuit 3 shown in Fig. 2 will be described with reference to a flowchart of Fig. 7. After a digital NTSC signal has been stored in field memory 11 at step S1, a particular field is selected as the subject field and a particular pixel in the subject field is selected as the subject pixel by control circuit 17.
Control circuit 17 causes additional pixels (described in connection with Fig. 3) necessary for performing simplified Y/C separation on the subject pixel to be read out from field memory 11 and supplied to simplified Y/C separation circuit 12.
At step S2, simplified Y/C separation circuit 12 performs simplified Y/C
separation by using the pixels supplied from field memory 11. Three luminance signals Y1-Y3 are determined for the subject pixel in the manner described above and supplied to difference circuit 13. At step S3, difference circuit 13 supplies difference absolute values D1-D3, based upon the luminance signals Y1-Y3 that are supplied from the simplified Y/C
separation circuit 12 and that are calculated in the manner described above, to comparison circuit 14. At step S4, comparison circuit 14 compares the difference absolute values D1-D3 that are supplied from difference circuit 13 with respective predetermined threshold values. Flags F1-F3, indicating magnitude relationships with the threshold value as described above, are supplied to classification circuit 15.
At step S5, classification circuit 15 classifies the subject pixel based on flags F1-F3 that are supplied from comparison circuit 14 in the manner described above in connection with Fig. 4. A resulting class into which the subject pixel is classified is forwarded to prediction coefficients memory section 16 as an address. At this time, control circuit 17 supplies prediction coefficients memory section 16 with CS signals for selecting the Y-I
memory 16A, Y-Q memory 16B, Y+I memory 16C, and Y+Q memory 16D when the NTSC
signal of the subject pixel is a Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal, respectively.
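A minimal sketch of how the flags might be combined into a class address and how the phase could select among the four coefficient memories. The bit-packing of F1-F3 and the `select_memory` helper are illustrative assumptions; Fig. 4 defines the actual class mapping.

```python
def class_code(flags):
    """Pack flags F1-F3 into a class index (0-7).

    Treating the flags as bits is an illustrative choice, not the
    patent's defined mapping.
    """
    f1, f2, f3 = flags
    return (f1 << 2) | (f2 << 1) | f3

# One coefficient table per NTSC phase, mirroring memories 16A-16D
PHASES = ("Y-I", "Y-Q", "Y+I", "Y+Q")

def select_memory(memories, phase):
    """Select the coefficient memory matching the subject pixel's phase,
    as the CS signals do for memories 16A-16D."""
    return memories[PHASES.index(phase)]
```

The class index then serves as the address into the selected memory, just as the class output of classification circuit 15 addresses prediction coefficients memory section 16.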
At step S6, respective sets of prediction coefficients for R, G, and B at an address corresponding to the class of the subject pixel that is supplied from the classification circuit 15 are read out from one of the memories 16A-16D that is selected in accordance with the CS signal that is supplied from control circuit 17, and supplied to operation circuit 19.
At step S7, control circuit 17 causes pixels to be read from field memory 11 to prediction taps forming circuit 18, and prediction taps forming circuit 18 forms prediction taps as described above in connection with Fig. 6 for the subject pixel. The prediction taps are supplied to operation circuit 19. Step S7 can be executed in parallel with steps S2-S6.
After receiving the prediction coefficients from prediction coefficients memory section 16 and the prediction taps from prediction taps forming circuit 18, at step S8 operation circuit 19 executes the adaptive process as described above.
Specifically, operation circuit 19 determines R, G, and B signals of the subject pixel by calculating linear first-order equations (1), and outputs those signals.
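Step S8 amounts to evaluating one linear first-order equation per component signal. The following sketch assumes the coefficients read from the memory are held as a (weights, offset) pair per component, which is an illustrative representation rather than the circuit's actual data layout.

```python
import numpy as np

def adaptive_convert(prediction_taps, coeffs):
    """Produce component signals for the subject pixel from its
    prediction taps.

    coeffs: dict mapping component name -> (weights, offset), as read
    from the class address of the selected coefficient memory.
    """
    taps = np.asarray(prediction_taps, dtype=float)
    # One linear first-order equation per component signal
    return {c: float(taps @ w + b) for c, (w, b) in coeffs.items()}
```

Each output value is simply the dot product of the tap values with the class's weights plus the constant offset term.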
Then, at step S9, control circuit 17 determines whether the process has been executed for all pixels constituting the subject field that are stored in the field memory. If it is determined at step S9 that the process has not been executed yet for all pixels constituting the subject field, the process returns to step S1, where one of the pixels constituting the subject field that has not been employed as the subject pixel is utilized as a new subject pixel. Then, step S2 and the following steps are repeated. If it is judged at step S9 that the process has been executed for all pixels constituting the subject field, the process is finished. Steps S1-S9 in the flowchart of Fig. 7 are repeated every time a new field is employed as the subject field.
Fig. 8 shows an example configuration of an embodiment of a learning apparatus for determining prediction coefficients of respective classes for R, G and B
signals to be stored in prediction coefficients memory section 16 shown in Fig. 2. A picture, including a predetermined number of fields of RGB signals for learning (component signals for learning), is supplied to a field memory 21 and stored therein. RGB signals of pixels constituting the picture for learning are read out from field memory 21 under the control of a control circuit 27, and supplied to an RGB/NTSC encoder 22 and to control circuit 27. RGB/NTSC
encoder 22 encodes (converts) the RGB signal of each pixel that is supplied from field memory 21 into a digital NTSC signal. The digital NTSC signal is in turn supplied to a simplified Y/C
separation circuit 23 and to control circuit 27. Simplified Y/C separation circuit 23, a difference circuit 24, a comparison circuit 25, and a classification circuit 26 are configured in the same manner as simplified Y/C separation circuit 12, difference circuit 13, comparison circuit 14, and classification circuit 15 shown in Fig. 2, respectively. A
class code indicative of a class to which the subject pixel belongs is output from classification circuit 26 and is supplied to a learning data memory section 28 as an address.
Control circuit 27 sequentially designates one or more fields stored in field memory 21 as the subject field in line scanning order, for instance, and causes RGB
signals of pixels that are necessary for processing the subject pixel to be additionally read out from field memory 21 and supplied to the RGB/NTSC encoder 22, and to control circuit 27 itself.

Specifically, control circuit 27 causes RGB signals of pixels that are necessary for performing simplified Y/C separation (described above in connection with Fig. 3) on the subject pixel to be read out and supplied to RGB/NTSC encoder 22. The RGB signals of the pixels necessary for performing simplified Y/C separation are converted into a digital NTSC
signal by RGB/NTSC encoder 22, and the digital NTSC signal is supplied to simplified Y/C
separation circuit 23. Control circuit 27 also causes RGB signals of the subject pixel and RGB signals of pixels constituting prediction taps for the subject pixel to be read out from field memory 21, and causes the RGB signals of the subject pixel to be supplied to control circuit 27 itself and the RGB signals of the pixels constituting the prediction taps to be supplied to RGB/NTSC
encoder 22. As a result, the RGB signals of the pixels constituting the prediction taps are converted into digital NTSC signals (composite signals for learning) in RGB/NTSC encoder 22, and the digital NTSC signals are supplied to control circuit 27.
Further, when receiving the digital NTSC signals of the pixels constituting the prediction taps from RGB/NTSC encoder 22 in the above manner, control circuit 27 employs the prediction taps of the digital NTSC signal as learning data and employs, as teacher data, the RGB signals of the subject pixel that have been read out from field memory 21. Control circuit 27 collects the learning data and the teacher data and supplies the collected data to learning data memory section 28. That is, the RGB signals of the subject pixel are collected with the digital NTSC signals of the pixels having the positional relationships with the subject pixel as described above in connection with Fig. 6, and the collected data are supplied to learning data memory section 28.
Control circuit 27 then outputs a CS signal for selecting one of a Y-I memory 28A, a Y-Q memory 28B, a Y+I memory 28C, and a Y+Q memory 28D (described later;
hereinafter collectively referred to as memories 28A-28D where appropriate) that constitute the learning data memory section 28, corresponding to the phase of the subject pixel. That is, control circuit 27 supplies learning data memory section 28 with CS signals for selecting Y-I memory 28A, Y-Q memory 28B, Y+I memory 28C, and Y+Q memory 28D when the digital NTSC
signal of the subject pixel is a Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal, respectively.
Learning data memory section 28 is composed of Y-I memory 28A, Y-Q memory 28B, Y+I memory 28C, and Y+Q memory 28D, which are supplied with the class of the subject pixel as an address that is output from classification circuit 26, as well as with the CS
signal that is output from control circuit 27. Learning data memory section 28 is supplied with the above-mentioned collection of teacher data and learning data. The collection of teacher data and learning data that is output from control circuit 27 is stored in one of memories 28A-28D, selected by the CS signal that is supplied from control circuit 27, at an address corresponding to the class of the subject pixel, the class being output from classification circuit 26.
Therefore, the collections of the RGB signals (teacher data) of the subject pixel and the digital NTSC signals of the pixels constituting the prediction taps for the subject pixel in cases where the digital NTSC signal of the subject pixel is a Y-I signal, a Y-Q signal, a Y+I
signal, and a Y+Q signal are stored in Y-I memory 28A, Y-Q memory 28B, Y+I
memory 28C, and Y+Q memory 28D, respectively. That is, the collection of the teacher data and the learning data is stored in learning data memory section 28 for each phase of the NTSC signal of the subject pixel. Each of the memories 28A-28D is configured so as to be able to store plural pieces of information at the same address, whereby plural collections of learning data and teacher data of pixels that are classified in the same class can be stored at the same address.
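The per-phase, per-class accumulation of (learning data, teacher data) collections described above might be modeled as nested tables, one per phase, with lists accumulating at each class address. The function names and the dictionary representation are illustrative assumptions.

```python
from collections import defaultdict

def make_learning_store():
    """Learning-data store mirroring memories 28A-28D: one table per NTSC
    phase, each holding lists of (taps, rgb) collections per class address."""
    return {phase: defaultdict(list) for phase in ("Y-I", "Y-Q", "Y+I", "Y+Q")}

def store_collection(store, phase, class_addr, taps, rgb):
    """Store one (learning data, teacher data) collection; multiple pixels
    of the same class accumulate at the same address."""
    store[phase][class_addr].append((taps, rgb))
```

This reflects the point above that plural collections of pixels classified into the same class can be stored at the same address.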
After the process has been executed by employing, as the subject pixel, all pixels constituting the picture for learning that is stored in field memory 21, each of operation circuits 29A-29D reads out collections of NTSC signals of pixels constituting prediction taps as learning data and RGB signals as teacher data that are stored at each address of each of memories 28A-28D. Each operation circuit 29A, 29B, 29C, or 29D then calculates, by a least squares method, prediction coefficients that minimize errors between prediction values of RGB signals and the teacher data. That is, each of operation circuits 29A-29D
establishes normal equations (11) for each class and each of the R, G, and B signals, and determines prediction coefficients for R, G, and B (R prediction coefficients WRa through WRk and WRoffset, G prediction coefficients WGa through WGk and WGoffset, and B prediction coefficients WBa through WBk and WBoffset) for each class by solving the normal equations.
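The work of each operation circuit 29A-29D can be sketched as an independent least-squares solve per class address, producing one (weights, offset) set per component signal. The `train_class` helper and its data layout are illustrative assumptions, not the circuit's actual structure.

```python
import numpy as np

def train_class(collections):
    """Solve the normal equations for one class address: one
    (weights, offset) pair per component signal (R, G, B).

    collections: list of (taps, rgb) pairs accumulated for this class,
    where taps is a tap vector and rgb a 3-tuple of teacher values.
    """
    taps = np.array([t for t, _ in collections], dtype=float)
    rgb = np.array([c for _, c in collections], dtype=float)  # (n, 3)
    # Column of ones carries the constant offset term
    a = np.hstack([taps, np.ones((taps.shape[0], 1))])
    coeffs = {}
    for i, comp in enumerate("RGB"):
        w, *_ = np.linalg.lstsq(a, rgb[:, i], rcond=None)
        coeffs[comp] = (w[:-1], w[-1])
    return coeffs
```

Running this over every class address of each phase table yields the per-phase coefficient sets that would populate memories 30A-30D.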
Since operation circuits 29A-29D execute processes by using data stored in memories 28A-28D, respectively, they generate prediction coefficients for the respective phases of a digital NTSC signal, that is, coefficients for converting a Y-I signal, a Y-Q
signal, a Y+I
signal, and a Y+Q signal into RGB signals, respectively. Each of a Y-I memory 30A, a Y-Q
memory 30B, a Y+I memory 30C, and a Y+Q memory 30D (hereinafter collectively referred to as memories 30A-30D where appropriate) stores sets of prediction coefficients for R, G, and B that have been determined by the operation circuit 29A, 29B, 29C, or 29D
at an address corresponding to each class, to be used for converting a Y-I signal, a Y-Q signal, a Y+I signal, or a Y+Q signal into RGB signals.

Next, a learning process executed in the learning apparatus of Fig. 8 will be described with reference to the flowchart of Fig. 9. After RGB signals of a picture for learning have been stored in field memory 21, at step S11 control circuit 27 selects a certain pixel from the picture for learning as the subject pixel. Then, control circuit 27 also causes the additional pixels necessary for performing simplified Y/C separation on the subject pixel to be read out from field memory 21 and supplied to RGB/NTSC encoder 22. In RGB/NTSC encoder 22, the RGB signals of the respective pixels that are supplied from field memory 21 are converted into digital NTSC signals, which are supplied to simplified Y/C
separation circuit 23.
At step S12, simplified Y/C separation circuit 23 performs simplified Y/C
separation by using the pixels supplied from RGB/NTSC encoder 22, whereby three luminance signals Y1-Y3 are determined for the subject pixel in the same manner as described above in connection with Fig. 2, and are then supplied to difference circuit 24.
Thereafter, at steps S13-S15, difference circuit 24, comparison circuit 25, and classification circuit 26 execute the same processes as set forth in steps S3-S5 of Fig. 7, whereby a class to which the subject pixel belongs is output from classification circuit 26. The class of the subject pixel is forwarded to learning data memory section 28 as an address.
At step S16, control circuit 27 supplies the learning data memory section 28 with CS
signals for selecting the Y-I memory 28A, Y-Q memory 28B, Y+I memory 28C, and Y+Q
memory 28D when the digital NTSC signal allocated to the subject pixel is a Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal, respectively. Further, at step S16, control circuit 27 causes RGB signals of the subject pixel and RGB signals of pixels constituting prediction taps for the subject pixel to be read out from field memory 21. The RGB
signals of the subject pixel are then supplied to control circuit 27 itself and the RGB
signals of the pixels constituting the prediction taps are supplied to RGB/NTSC encoder 22. In this case, RGB/NTSC encoder 22 converts the RGB signals of the pixels constituting the prediction taps into digital NTSC signals, which are also supplied to the control circuit 27.
Then, control circuit 27 employs, as learning data, the digital NTSC signals of the pixels constituting the prediction taps that are supplied from RGB/NTSC
encoder 22, and employs, as teacher data, the RGB signals of the subject pixel that are supplied from field memory 21. Control circuit 27 collects the learning data and the teacher data and supplies the collected data to learning data memory section 28. Step S16 can be executed in parallel with steps S12-S15. At step S17, the collection of the teacher data and the learning data that is output from control circuit 27 is stored in one of memories 28A-28D at an address corresponding to the class of the subject pixel that is output from classification circuit 26.
The particular memory used for storage is selected by the CS signal that is supplied from control circuit 27.
Then, at step S18, control circuit 27 determines whether the process has been executed for all pixels constituting the picture for learning that is stored in field memory 21.
If it is determined at step S18 that the process has not been executed for all pixels constituting the picture for learning, the process returns to step S11, where a pixel that has not yet been the subject pixel is employed as a new subject pixel. Then, step S12 and the following steps are repeated.
If it is determined at step S18 that the process has been executed for all pixels constituting the picture for learning, the process proceeds to step S19. At step S19, each of the operation circuits 29A-29D reads out collections of learning data and teacher data at each address from the memory 28A, 28B, 28C, or 28D, and normal equations (11) are established for each of R, G, and B. Further, the established normal equations are also solved at step S19, whereby sets of prediction coefficients to be used for converting a Y-I
signal, a Y-Q
signal, a Y+I signal, or a Y+Q signal into RGB signals are determined for each class. The sets of prediction coefficients of the respective classes corresponding to a Y-I signal, a Y-Q
signal, a Y+I signal, and a Y+Q signal are supplied to and stored in respective memories 30A-30D. The learning process is then completed. The sets of prediction coefficients for R, G, and B stored in memories 30A-30D are then stored for each class in the respective memories 16A-16D shown in Fig. 2.
In the above learning process, there may occur a class for which the necessary number of normal equations for determining prediction coefficients is not obtained.
For such a class, for example, prediction coefficients that are obtained by establishing normal equations with the class distinctions disregarded, and then solving those normal equations, may be employed as default prediction coefficients.
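One hedged reading of this fallback: train default coefficients on the pooled data of all classes, and substitute them for any class whose accumulated collections are too few. The helper below is an illustrative sketch, parameterized over an arbitrary training function rather than tied to the patent's circuits.

```python
def train_with_defaults(class_collections, min_samples, train_fn):
    """Train per-class coefficients, falling back to defaults.

    class_collections: dict mapping class address -> list of learning pairs.
    Classes with fewer than min_samples pairs receive coefficients trained
    on the pooled data of all classes (the assumed 'default' coefficients).
    """
    pooled = [pair for pairs in class_collections.values() for pair in pairs]
    default = train_fn(pooled)
    return {addr: train_fn(pairs) if len(pairs) >= min_samples else default
            for addr, pairs in class_collections.items()}
```

Any per-class training function (such as a least-squares solver) can be plugged in as `train_fn`; classes with sufficient data keep their own coefficients.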
As described above, the subject pixel is classified based on correlations between a plurality of luminance signals that are determined for the subject pixel, and the digital NTSC
signal of the subject pixel is converted into RGB signals by using prediction coefficients corresponding to the obtained class, that is, prediction coefficients suitable for the subject pixel. Therefore, in particular, the frequency of occurrence of dot interference due to a luminance edge and of cross-color, that is, a luminance-dependent variation in color, can be reduced.

In the above embodiments, since an NTSC signal is directly converted into RGB
signals (prediction coefficients for such a conversion are determined by learning), the scale of the apparatus can be made smaller than in conventional cases where RGB signals are determined by Y/C-separating an NTSC signal and matrix-converting the resulting YIQ signals.
That is, for example, where RGB signals are determined by Y/C-separating an NTSC signal and matrix-converting the resulting YIQ signals, both a chip for the Y/C
separation and a chip for the matrix conversion are needed. In contrast, the classification adaptive processing circuit 3 shown in Fig. 2 can be constructed in the form of one chip.
Although in the above embodiments an NTSC signal is converted into RGB signals by calculating linear first-order formulae of the NTSC signal and prediction coefficients, the NTSC signal can be converted into RGB signals by other methods, for example, by calculating nonlinear operation formulae.
Although in the above embodiments simplified Y/C separation is performed by using pixels that are arranged in three directions, that is, arranged horizontally or vertically, or located at the same positions and arranged temporally, other methods can be used. For example, it is possible to perform simplified Y/C separation by using pixels that are spatially arranged in oblique directions or pixels that are located at different positions and arranged temporally, and then determine luminance signals of the subject pixel. Further, operation formulae that are used in the simplified Y/C separation are not limited to those described above.
Although in the above embodiments prediction taps are formed by pixels as described in connection with Fig. 6, these prediction taps may be formed by other pixels.
Although in the above embodiments the adaptive process and the learning process are executed for each phase of an NTSC signal, they can be executed irrespective of the phases of an NTSC signal. However, more accurate RGB signals and prediction coefficients can be obtained by executing the adaptive process and the learning process for each phase of an NTSC signal.
Although in the above embodiments an NTSC signal is converted into RGB signals (signals of three primary colors), other conversions are also possible. For example, it is possible to convert a signal based upon the PAL method or the like into RGB
signals, or to convert an NTSC signal into YUV signals (a luminance signal Y and color difference signals U and V) or YIQ signals. That is, no particular limitation is imposed on the composite signal before conversion and the component signals after conversion.

Although in the above embodiments flags representing magnitude relationships between a predetermined threshold value and difference absolute values between a plurality of luminance signals determined for the subject pixel are used as their correlation values, other physical quantities may be used.
Although the above embodiments are directed to a field-by-field process, other kinds of processes are possible, such as a frame-by-frame process.
The invention can also be applied to picture-handling apparatuses other than a television receiver, for instance, a VTR (video tape recorder), a VDR (video disc recorder), or the like. Further, the invention can be applied to both a moving picture and a still picture.
Although in the above embodiments a Y-I signal, a Y-Q signal, a Y+I signal, and a Y+Q signal are obtained by sampling an NTSC signal, the sampling of an NTSC
signal may be performed with any timing as long as signals of the same phase are obtained every four sampling operations. However, in the latter case, it is necessary to use signals of the same phases also in the learning.
The invention can be implemented by a computer program running on a general-purpose computer as well as by dedicated hardware.
As described above, in the signal conversion apparatus and the signal conversion method according to the invention, a plurality of luminance signals of a subject pixel are calculated based on a composite signal of the subject pixel and composite signals of pixels that are close to the subject pixel spatially or temporally, and correlations therebetween are determined. Then, classification is performed for classifying the subject pixel as one of prescribed classes based on the correlations between the plurality of luminance signals, and component signals of the subject pixel are determined by performing operations by using coefficients corresponding to the class of the subject pixel. Therefore, it becomes possible to obtain a high-quality picture of component signals.
In the learning apparatus and the learning method according to the invention, component signals for learning are converted into a composite signal for learning, and a plurality of luminance signals of a subject pixel are calculated based on a composite signal of the subject pixel and composite signals of pixels that are close to the subject pixel spatially or temporally. Then, correlations between the plurality of luminance signals are determined and classification is performed by determining the class of the subject pixel based on the correlations. Operations are then performed for determining, for each of the classes, the coefficients that decrease errors, with respect to the component signals for learning, of component signals that are obtained by performing operations by using the composite signal for learning and the coefficients. Therefore, it becomes possible to obtain coefficients for obtaining a high-quality picture of component signals.
It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and, since certain changes may be made in carrying out the above method and in the constructions set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
It is also to be understood that the following claims are intended to cover all of the generic and specific features of the invention herein described, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.

Claims (27)

1. A method for converting a composite signal into component signals comprising the steps of:
calculating a number of luminance signals corresponding to a subject pixel based on a composite signal corresponding to the subject pixel and composite signals corresponding to at least one pixel spatially or temporally adjacent to the subject pixel;
determining a correlation among the number of luminance signals;
classifying the subject pixel as belonging to one of a predetermined number of classes based upon the determined correlation;
generating a class information corresponding to at least one group of predictive coefficients based on the classification of the subject pixel; and producing component signals for the subject pixel based on the at least one group of predictive coefficients corresponding to the class information and at least one composite signal corresponding to the at least one pixel adjacent to the subject pixel.
2. The method according to claim 1, wherein the at least one group of predictive coefficients is read out from a memory based on the class information, the at least one group of predictive coefficients for each of said respective predetermined number of classes being stored in the memory.
3. The method according to claim 2, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes includes predictive coefficients for each component signal.
4. The method according to claim 2, wherein the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes is stored for each phase of the composite signal.
5. The method according to claim 2, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes is generated based on component signals utilized in advance for learning.
6. The method according to claim 1, wherein the component signals are a luminance signal and color difference signals.
7. The method according to claim 1, wherein the component signals are three primary color signals.
8. The method according to claim 1, further comprising the step of determining the correlation among the number of luminance signals based on a magnitude relationship between a threshold value and a difference between the number of luminance signals.
9. An apparatus for converting a composite signal into component signals comprising:
calculating means for calculating a number of luminance signals corresponding to a subject pixel based on a composite signal corresponding to the subject pixel and composite signals corresponding to at least one pixel spatially or temporally adjacent to the subject pixel;
determination means for determining a correlation among the number of luminance signals;
classification means for classifying the subject pixel as belonging to one of a predetermined number of classes based upon the determined correlation and for generating a class information corresponding to at least one group of predictive coefficients based on the classification of the subject pixel; and producing means for producing component signals for the subject pixel based on the at least one group of predictive coefficients corresponding to the class information and at least one composite signal corresponding to the at least one pixel adjacent to the subject pixel.
10. The apparatus according to claim 9, wherein the producing means includes memory for storing the at least one group of predictive coefficients for each of said respective predetermined number of classes, the at least one group of predictive coefficients being read from said memory based on the respective class information.
11. The apparatus according to claim 10, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes includes predictive coefficients for each component signal.
12. The apparatus according to claim 10, wherein the memory stores the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes for each phase of the composite signal.
13. The apparatus according to claim 10, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes is generated based on component signals utilized in advance for learning.
14. The apparatus according to claim 9, wherein the component signals are a luminance signal and color difference signals.
15. The apparatus according to claim 9, wherein the component signals are three primary color signals.
16. The apparatus according to claim 9, wherein said determination means determines the correlation among the number of luminance signals based on a magnitude relationship between a threshold value and a difference between the number of luminance signals.
17. An apparatus for converting a composite signal into component signals comprising:
a signal receiver;
a calculator coupled with said signal receiver and adapted to receive a pixel information therefrom;
a determiner coupled with said calculator and adapted to receive information therefrom;
a classifier coupled with said signal receiver and adapted to receive said pixel information therefrom; and a component signal producer coupled with the classifier and the signal receiver and adapted to receive information therefrom;
whereby the calculator calculates a number of luminance signals corresponding to a subject pixel based on a composite signal corresponding to the subject pixel received from the signal receiver and composite signals received from the signal receiver corresponding to at least one pixel spatially or temporally adjacent to the subject pixel, the determiner determines a correlation among the number of luminance signals based upon information received from the calculator and the classifier classifies the subject pixel received from the signal generator as belonging to one of a predetermined number of classes based upon the determined correlation by the determiner and generates a class information corresponding to at least one group of predictive coefficients based on the classification of the subject pixel; and whereby the component signal producer produces component signals for the subject pixel based on the at least one group of predictive coefficients corresponding to the class information received from the class information generator and at least one composite signal corresponding to the at least one pixel adjacent to the subject pixel received from the signal receiver.
18. The apparatus according to claim 17, further comprising:
a memory coupled with the classifier;
whereby the at least one group of predictive coefficients for each of said respective predetermined number of classes being stored in the memory and being read from the memory based on the class information.
19. The apparatus according to claim 18, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes includes predictive coefficients for each component signal.
20. The apparatus according to claim 18, wherein the memory stores the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes for each phase of the composite signal.
21. The apparatus according to claim 18, wherein each of the at least one group of predictive coefficients corresponding to each of said plurality of predetermined number of classes is generated based on component signals utilized in advance for learning.
22. The apparatus according to claim 17, wherein the component signals are a luminance signal and color difference signals.
23. The apparatus according to claim 17, wherein the component signals are three primary color signals.
24. The apparatus according to claim 17, wherein said determiner determines the correlation among the number of luminance signals based on a magnitude relationship between a threshold value and a difference between the number of luminance signals.
25. An apparatus for converting a composite signal into component signals, comprising:

separating means for separating a number of luminance signals, corresponding to a subject pixel, from a composite signal corresponding to the subject pixel and composite signals corresponding to at least one pixel spatially or temporally adjacent to the subject pixel;
classification means for classifying the subject pixel as belonging to one of a predetermined number of classes based upon the luminance signals separated at said separating means and for generating a class information corresponding to at least one group of predictive coefficients based on the classification of the subject pixel; and producing means for producing component signals for the subject pixel based on the at least one group of predictive coefficients corresponding to the class information and at least one composite signal corresponding to the at least one pixel adjacent to the subject pixel.

26. A method for converting a composite signal into component signals, comprising the steps of:

separating a number of luminance signals, corresponding to a subject pixel, from a composite signal corresponding to the subject pixel and composite signals corresponding to at least one pixel spatially or temporally adjacent to the subject pixel;

classifying the subject pixel as belonging to one of a predetermined number of classes based upon the separated luminance signals;

generating a class information corresponding to at least one group of predictive coefficients based upon the classification of the subject pixel;
and producing component signals for the subject pixel based upon the at least one group of predictive coefficients corresponding to the class information and at least one composite signal corresponding to the at least one pixel adjacent to the subject pixel.
CA002256830A 1997-12-25 1998-12-18 Signal conversion apparatus and method Expired - Fee Related CA2256830C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP35762197 1997-12-25
JPP09-357621 1997-12-25

Publications (2)

Publication Number Publication Date
CA2256830A1 CA2256830A1 (en) 1999-06-25
CA2256830C true CA2256830C (en) 2007-04-03

Family

ID=18455065

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002256830A Expired - Fee Related CA2256830C (en) 1997-12-25 1998-12-18 Signal conversion apparatus and method

Country Status (7)

Country Link
US (1) US6297855B1 (en)
EP (1) EP0926902B1 (en)
KR (1) KR100591021B1 (en)
CN (1) CN1178527C (en)
AU (1) AU746276B2 (en)
CA (1) CA2256830C (en)
DE (1) DE69835871T2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3777599B2 (en) * 2002-04-23 2006-05-24 ソニー株式会社 Image information conversion apparatus and method, coefficient calculation apparatus and method, coefficient data and coefficient data storage apparatus, image quality degradation point detection apparatus and method, recording medium, and program
JP4006628B2 (en) * 2002-07-03 2007-11-14 ソニー株式会社 Information processing apparatus, information processing method, recording medium, and program
JP4175124B2 (en) * 2003-01-24 2008-11-05 ソニー株式会社 Image signal processing device
DE10327083A1 (en) * 2003-02-11 2004-08-19 Giesecke & Devrient Gmbh Security paper, for the production of bank notes, passports and identity papers, comprises a flat substrate covered with a dirt-repellent protective layer comprising at least two lacquer layers
US20040179141A1 (en) * 2003-03-10 2004-09-16 Topper Robert J. Method, apparatus, and system for reducing cross-color distortion in a composite video signal decoder
JP4265291B2 (en) * 2003-06-06 2009-05-20 ソニー株式会社 Information signal processing apparatus and method, and program for executing information signal processing method
KR100580552B1 (en) * 2003-11-17 2006-05-16 엘지.필립스 엘시디 주식회사 Method and Apparatus for Driving Liquid Crystal Display Device
JP4843367B2 (en) * 2006-04-28 2011-12-21 株式会社東芝 Y / C separation circuit
CN101146234B (en) * 2006-09-12 2010-12-01 中兴通讯股份有限公司 Stream media image processing method
JP4895834B2 (en) * 2007-01-23 2012-03-14 Hoya株式会社 Image processing device
KR20150027951A (en) * 2013-09-05 2015-03-13 삼성디스플레이 주식회사 Method of driving light-source and display apparatus for performing the method

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2830111B2 (en) 1989-07-21 1998-12-02 ソニー株式会社 High efficiency coding device
US5124688A (en) * 1990-05-07 1992-06-23 Mass Microsystems Method and apparatus for converting digital YUV video signals to RGB video signals
JPH0447793A (en) * 1990-06-15 1992-02-17 Toshiba Corp Luminance/chrominance separator circuit
JPH0591532A (en) * 1991-09-30 1993-04-09 Toshiba Corp Image filter and adaptive type image filter learning method
KR0130963B1 (en) * 1992-06-09 1998-04-14 구자홍 Method for manufacturing field effect transistor
JPH06125567A (en) * 1992-10-13 1994-05-06 Matsushita Electric Ind Co Ltd Brightness/chrominance separating device
KR100360206B1 (en) * 1992-12-10 2003-02-11 소니 가부시끼 가이샤 Image signal converter
KR100311295B1 (en) 1993-08-30 2001-12-17 이데이 노부유끼 Image processing apparatus and method
JP3387170B2 (en) * 1993-09-28 2003-03-17 ソニー株式会社 Adaptive Y / C separation apparatus and method
KR0126658B1 (en) * 1993-10-05 1997-12-29 구자홍 The sample rate conversion device for signal processing of non-standard tv.
JPH07212794A (en) * 1994-01-12 1995-08-11 Sony Corp Signal separator
JP3632987B2 (en) * 1994-03-08 2005-03-30 ソニー株式会社 Processing apparatus and method
JP3387203B2 (en) * 1994-04-15 2003-03-17 ソニー株式会社 Color video signal correlation detection device, Y / C separation device, learning device, and methods thereof
US5821919A (en) * 1994-04-29 1998-10-13 Intel Corporation Apparatus for table-driven conversion of pixels from YVU to RGB format
KR970007809A (en) * 1995-07-14 1997-02-21 이형도 Head base of video tape recorder
US5831687A (en) * 1995-09-05 1998-11-03 Ricoh Company, Ltd. Color video signal processing method and apparatus for converting digital color difference component signals into digital RGB component signals by a digital conversion
JP3787650B2 (en) 1995-09-08 2006-06-21 ソニー株式会社 Digital image signal encoding apparatus and method, encoded image signal decoding apparatus and method
KR0157566B1 (en) * 1995-09-30 1998-11-16 김광호 Interpolation method and apparatus for hdtv
JP3435961B2 (en) * 1996-02-16 2003-08-11 ヤマハ株式会社 Image data conversion apparatus and image data conversion method
JPH1013856A (en) 1996-06-19 1998-01-16 Sony Corp Television broadcast signal processor and its method
JP3772408B2 (en) 1996-08-22 2006-05-10 ソニー株式会社 Image signal conversion apparatus and method
JP3695006B2 (en) 1996-09-06 2005-09-14 ソニー株式会社 Signal processing apparatus and method
JPH10150674A (en) 1996-11-19 1998-06-02 Sony Corp Receiver and its method

Also Published As

Publication number Publication date
KR100591021B1 (en) 2006-11-30
EP0926902A2 (en) 1999-06-30
KR19990063534A (en) 1999-07-26
EP0926902A3 (en) 1999-07-28
US6297855B1 (en) 2001-10-02
DE69835871D1 (en) 2006-10-26
EP0926902B1 (en) 2006-09-13
CN1236269A (en) 1999-11-24
AU9816098A (en) 1999-07-15
DE69835871T2 (en) 2007-04-05
AU746276B2 (en) 2002-04-18
CA2256830A1 (en) 1999-06-25
CN1178527C (en) 2004-12-01

Similar Documents

Publication Publication Date Title
US5221966A (en) Video signal production from cinefilm originated material
CA2256830C (en) Signal conversion apparatus and method
CN1984237A (en) Scene change detector and method thereof
EP0351787B1 (en) Video signal processing circuit
JPH10313445A (en) Image signal converter, television receiver using the same, and generating device and method for coefficient data used therefor
CA2023390A1 (en) Interstitial line generator
GB2240232A (en) Converting field rate of telecine signal
JP3864444B2 (en) Image signal processing apparatus and method
US6160917A (en) Method of calculating motion vectors
JP3767692B2 (en) Signal processing apparatus and method, recording medium, and program
US5442409A (en) Motion vector generation using interleaved subsets of correlation surface values
JPS60153682A (en) Detection system of movement in high-definition tv subsample transmission system
JP3777831B2 (en) Image information conversion apparatus and conversion method
JP2907663B2 (en) Motion vector detection method
JP2602213B2 (en) TV receiver
JP2000138949A (en) Image information conversion device and conversion method
JP4062714B2 (en) Video signal conversion apparatus and method
JPS634781A (en) Action signal detecting circuit in digital television receiver
JP4597282B2 (en) Image information conversion apparatus, conversion method, and display apparatus
JP2000152273A (en) Image information converter and converting method
JP4061632B2 (en) Image signal processing apparatus and method, learning apparatus and method, and recording medium
JP4752130B2 (en) Image processing apparatus and method, recording medium, and program
JP2623335B2 (en) Television signal receiving device
JPH11243559A (en) Signal converter, its method, leaning apparatus and its method
JPH07212794A (en) Signal separator

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20141218