WO1997001937A1 - Non-linear interpolation for color video line conversion - Google Patents

Non-linear interpolation for color video line conversion Download PDF

Info

Publication number
WO1997001937A1
Authority
WO
WIPO (PCT)
Prior art keywords
signals
video
signal
memory
color space
Prior art date
Application number
PCT/US1996/009409
Other languages
French (fr)
Inventor
Shao Wei Pan
Shay-Ping Thomas Wang
Original Assignee
Motorola Inc.
Priority date
Filing date
Publication date
Application filed by Motorola Inc. filed Critical Motorola Inc.
Priority to AU60988/96A priority Critical patent/AU6098896A/en
Publication of WO1997001937A1 publication Critical patent/WO1997001937A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0135 Conversion of standards involving interpolation processes
    • H04N 7/0117 Conversion of standards involving conversion of the spatial resolution of the incoming video signal
    • H04N 7/012 Conversion between an interlaced and a progressive signal
    • H04N 7/0125 Conversion of standards, one of the standards being a high definition standard
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/144 Movement detection

Definitions

  • the present invention is related to the following inventions which are assigned to the same assignee as the present invention:
  • the present invention relates generally to video systems and, in particular, to a video system which converts analog video signals into digital video signals.
  • the first color television system was developed in the United States, and in December 1953 the Federal Communications Commission (FCC) approved the transmission standard. Most of the work for developing a color transmission standard was done by the National Television System Committee (NTSC) .
  • NTSC National Television System Committee
  • the NTSC standard provides a format for broadcasting a video signal having 525 scan lines (485 representing pixels), 60 fields/second, and 2:1 interlacing. Television broadcasts in the United States, Japan, and many other countries currently adhere to the NTSC standard.
  • PAL Phase Alternation Line
  • SECAM Séquentiel Couleur à Mémoire
  • the three standards, NTSC, PAL, and SECAM, provide analog video sources having an interlaced display format, i.e., each frame of video is scanned out as two fields that are separated temporally and offset spatially in the vertical direction.
  • FIG. 1 illustrates the temporal separation of the fields in an interlaced video signal.
  • the video signal consists of a sequence of alternating even and odd fields separated by a period of time for synchronizing the fields.
  • FIG. 2 shows an example of a raster displaying an odd field.
  • FIG. 3 shows an example of a waveform of an NTSC composite video signal. The waveform shown represents two scan lines. The waveform includes a horizontal sync pulse 50 and a color burst 52 for each scan line.
  • HDTV High-Definition Television
  • FIG. 1 illustrates a temporal block diagram of an interlaced video signal.
  • FIG. 2 illustrates a graphical representation of a raster represented by the interlaced video signal of FIG. 1.
  • FIG. 3 illustrates a graphical representation of a waveform segment of an NTSC video signal.
  • FIG. 4 illustrates a block diagram representation of a video system in accordance with one embodiment of the present invention.
  • FIG. 5 illustrates a block diagram representation of a video system in accordance with a preferred embodiment of the present invention.
  • FIG. 6 illustrates a block diagram representation of a video system in accordance with another embodiment of the present invention.
  • FIG. 7 illustrates a block diagram representation of a video system in accordance with a further embodiment of the present invention.
  • FIG. 8 conceptually illustrates non-uniform interpolation performed in accordance with an embodiment of the present invention.
  • FIG. 9 illustrates a graphical representation of linear, non-uniform interpolation performed in accordance with an embodiment of the present invention.
  • FIG. 10 illustrates a graphical representation of non-linear, non-uniform interpolation performed in accordance with an embodiment of the present invention.
  • FIG. 11 illustrates a flow diagram of a method of using the video systems shown in FIGS. 4-7.
  • FIG. 12 is a detailed block diagram of the converter shown in FIGS. 4-7.
  • FIG. 13 is a detailed block diagram of the enhanced-video circuit shown in FIGS. 4-7.
  • FIG. 14 is a detailed block diagram of an alternative version of the enhanced-video circuit shown in FIGS. 4-7 in accordance with one embodiment of the present invention.
  • FIG. 15 is a flow diagram of a method of using the enhanced-video circuits shown in FIGS. 13-14.
  • FIG. 16 illustrates a flow diagram of a method of processing an interlaced video signal to generate a high-resolution video signal.
  • FIG. 17 illustrates a flow diagram of a method of processing an interlaced color video signal to generate a high-resolution video signal in accordance with one embodiment of the present invention.
  • FIG. 18 illustrates a contextual diagram of a broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
  • FIG. 19 illustrates a contextual diagram of a cable broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
  • the video system includes a converter 70, a memory 72, an enhanced-video circuit 74, a monitor 76, and a sync generator 78.
  • the converter 70 provides a means for converting a video signal 80 to a plurality of color space signals.
  • the format of the video signal 80 can be based on conventional standards for television transmission, such as NTSC, PAL, or SECAM.
  • the color space signals can be construed as binary words that represent values in a given color space, such as a YIQ, YUV, or RGB color space.
  • the color space signals are passed from the converter 70 to the memory 72.
  • the memory 72 stores the plurality of color space signals corresponding to an input frame, and, in turn, provides the color space signals as output.
  • the enhanced-video circuit 74 receives color space signals from the memory 72 and performs non-uniform interpolation between adjacent color space signals. As a result of performing the non-uniform interpolation, the enhanced-video circuit 74 generates a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame.
  • the interpolated pixel signals can be construed as binary words representing values in the same color space as the color space signals.
  • the sync generator 78 generates a sync trigger signal 86, a sampling signal 82, and a field sync signal 92. All of these signals are generated from the video signal 80.
  • the sync trigger signal 86 is distributed to the memory 72 and the enhanced-video circuit 74 for coordinating the transfer of the color space signals from the memory 72 to the enhanced-video circuit 74.
  • the sampling signal 82 synchronizes the operations of the converter 70 and the memory 72. If the video signal 80 is an NTSC signal, the frequencies of the sampling signal 82, the sync trigger signal 86, and the field sync signal 92 are approximately 12.27 MHz, 24.54 MHz, and 60 Hz, respectively.
  • the monitor 76 displays an image represented by the interpolated pixel signals that it receives from the enhanced-video circuit 74.
  • the monitor 76 is any means for receiving and displaying a visual image represented by an electronic signal.
  • the monitor 76 could include a consumer TV, a projection TV, a computer monitor, or a liquid crystal display (LCD) .
  • FIG. 5 illustrates a block diagram of a video system in accordance with a preferred embodiment of the present invention.
  • the video system includes the converter 70, memory 72, sync generator 78, and monitor 76 shown in FIG. 4.
  • the preferred video system includes a PLL 104 (phase locked loop) and an enhanced-video circuit 100 which allow the video system to vary the number of vertical scan lines in the output frame.
  • PLL 104 phase locked loop
  • the PLL 104 generates at least one high-band sync signal from the field sync signal 92.
  • the PLL 104 can be either an analog or digital PLL.
  • the PLL 104 provides the high-band sync signal to the enhanced-video circuit 100.
  • the high-band sync signal is used to transfer interpolated pixel signals from the enhanced-video circuit 100.
  • FIG. 6 illustrates a block diagram of a video system in accordance with another embodiment of the present invention. Like the video system in FIG. 4, the video system of FIG. 6 includes the converter 70, the memory 72, the enhanced-video circuit 74, the sync generator 78, and the monitor 76. However, in addition to these elements, the video system of FIG.
  • the 6 includes a color space converter 124 for converting the interpolated pixel signals from the enhanced-video circuit 74 into a plurality of output format signals.
  • the output format signals are passed to the monitor 76 which in response displays an image represented by the signals. Examples of possible output format signals are RGB signals and YCrCb signals.
  • the color space converter 124 is useful when the color spaces of the color space signals and the monitor 76 are different. For example, the converter 70 may generate as output a plurality of YUV signals, whereas the monitor 76 responds to RGB signals. In this circumstance, the color space converter 124 would convert the YUV signals to corresponding RGB signals.
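Where the converter outputs YUV color space signals and the monitor responds to RGB signals, the color space converter's role can be sketched as below. This is a minimal sketch: the conversion coefficients are the standard BT.601 YUV-to-RGB relation, and the function name is illustrative, not taken from the patent.

```python
# Sketch of the role of color space converter 124: converting YUV color
# space signals into RGB output format signals. Coefficients are the
# standard BT.601 relation (an assumption, not the patent's values).

def yuv_to_rgb(y, u, v):
    """Convert one YUV sample to an (r, g, b) tuple."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return (r, g, b)

# A pure grey sample (u = v = 0) maps to equal RGB components.
print(yuv_to_rgb(0.5, 0.0, 0.0))
```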
  • FIG. 7 illustrates a block diagram of a video system in accordance with a further embodiment of the present invention.
  • This version of the video system includes the converter 70, memory 72, sync generator 78, monitor 76, enhanced-video circuit 100, and PLL 104 as shown in FIG. 5.
  • the video system includes the color space converter 124 for converting the interpolated pixel signals from the enhanced-video circuit 100 into a plurality of output format signals.
  • the output format signals are passed to the monitor 76 which in response displays an image represented by the signals. Examples of possible output format signals are RGB signals and YCrCb signals.
  • the enhanced video circuit 100 and the PLL 104 allow the video system to vary the number of vertical scan lines in the output frame.
  • the PLL 104 generates at least one high-band sync signal from the field sync signal 92.
  • the high-band sync signal is phase-locked to the field sync signal and has a frequency which is a multiple of the field sync signal.
  • the PLL 104 provides the high-band sync signal to the enhanced-video circuit 100 and the color space converter 124.
  • FIG. 8 conceptually illustrates non-uniform interpolation performed in accordance with an embodiment of the present invention.
  • the video signal 80 received by the video system comprises a plurality of scan lines, four of which are shown in FIG. 8. Each scan line includes a plurality of color space signals.
  • the input scan lines are indexed (k, k+1, and so on) according to their relative vertical positions in a frame.
  • the video system processes the input video signal to generate a corresponding plurality of output scan lines.
  • Each output scan line includes a plurality of interpolated pixel signals.
  • the color space signals in each pair of adjacent scan lines are interpolated to produce three output scan lines of interpolated pixel signals.
  • input scan lines k and k+1 constitute an adjacent pair of scan lines, and thus contain a plurality of adjacent color space signals.
  • the three upper-most output scan lines are generated from input scan lines k and k+1 using non-uniform interpolation.
  • the output scan lines are depicted as being equally spaced; however, non-uniform interpolation can also be used to generate output scan lines having irregular spacing.
  • an adjacent pair of input scan lines can be non-uniformly interpolated to generate any number of corresponding output scan lines.
  • an NTSC signal which has approximately 485 scan lines per frame can be non-uniformly interpolated to generate output frames having 700, 800, 900, 1000, 1200, or 1920 scan lines.
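The line-count conversion above can be sketched by mapping each output scan line back onto the input line grid, which identifies the adjacent input pair (k, k+1) and a fractional offset for the interpolation weights. The line counts come from the text; the mapping function itself is an illustrative assumption.

```python
# Sketch of non-uniform line conversion: map each output scan line to
# its upper adjacent input line k and a fractional distance below it.

def line_mapping(n_in, n_out):
    """For each output line, return (k, frac): the upper adjacent
    input line and the fractional offset below it."""
    mapping = []
    for i in range(n_out):
        pos = i * (n_in - 1) / (n_out - 1)  # position on the input grid
        k = min(int(pos), n_in - 2)         # upper adjacent input line
        mapping.append((k, pos - k))
    return mapping

# 485 NTSC input lines resampled to an 800-line output frame.
m = line_mapping(485, 800)
print(len(m), m[0], m[-1])
```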
  • FIG. 9 illustrates a graphical representation of linear, non-uniform interpolation performed in accordance with an embodiment of the present invention. It will be apparent to one of ordinary skill in the art that linear interpolation is a special case of non-linear interpolation. Linear, non-uniform interpolation is based on a function:
  • y_i = c1,ik d_k + c2,ik d_(k+1) (Equation 1)
  • where y_i represents an interpolated pixel signal;
  • d_k and d_(k+1) represent a pair of adjacent color space signals;
  • c1,ik represents a first coefficient;
  • c2,ik represents a second coefficient; and
  • i and k are integer indices corresponding to the output scan lines and the input scan lines, respectively.
  • the interpolated pixel signal y_i corresponds to adjacent color space signals d1 and d2 located in input scan lines k and k+1, respectively.
  • the variables x1, x2, and x3 represent distances.
  • the coefficients are determined as follows:
  • c1,ik = x2 / x3 (Equation 2)
  • c2,ik = x1 / x3 (Equation 3)
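The linear, non-uniform interpolation can be sketched directly, with the coefficients reconstructed as c1 = x2/x3 and c2 = x1/x3, where x1 is the distance from d_k to the output line, x2 the distance from the output line to d_(k+1), and x3 their sum (consistent with the distances in FIG. 9). The function name and argument order are illustrative.

```python
# Sketch of linear, non-uniform interpolation between two adjacent
# color space signals, weighted by distance to the output scan line.

def interp_linear(d_k, d_k1, x1, x2):
    x3 = x1 + x2
    c1 = x2 / x3                   # first coefficient
    c2 = x1 / x3                   # second coefficient
    return c1 * d_k + c2 * d_k1    # interpolated pixel signal

# Output line twice as close to d_k as to d_(k+1):
print(interp_linear(90.0, 30.0, 1.0, 2.0))
```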
  • FIG. 10 illustrates a graphical representation of non-linear, non-uniform interpolation performed in accordance with an embodiment of the present invention.
  • FIG. 10 depicts 2nd-order non-linear interpolation based on a function:
  • y_i = c1,ik d_k + c2,ik d_(k+1) + c3,ik d_(k+2) (Equation 4)
  • where y_i represents an interpolated pixel signal;
  • d_k, d_(k+1), and d_(k+2) represent three successive adjacent color space signals;
  • c1,ik represents a first coefficient;
  • c2,ik represents a second coefficient;
  • c3,ik represents a third coefficient; and
  • i and k are integer indices corresponding to the output scan lines and the input scan lines, respectively.
  • the coefficients can be construed as being weight values.
  • FIG. 10 represents 2nd-order non-linear interpolation
  • an embodiment of the present invention can use any nth-order non-linear interpolation.
  • the non-linear interpolation can be based on an nth-order polynomial expansion.
  • the interpolated pixel signal corresponds to adjacent color space signals d1, d2, and d3 located in input scan lines k, k+1, and k+2, respectively.
  • a quadratic interpolation function is applied to the three adjacent color space signals to obtain the interpolated pixel signal.
  • the variables x1, x2, x3, x4, x5, and x6 represent distances. The coefficients are determined as follows:
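The patent's own coefficient equations for the quadratic case are not reproduced in this extract. A minimal sketch, assuming the standard second-order Lagrange form for the three coefficients (an assumption, not the patent's stated equations):

```python
# Sketch of quadratic interpolation over three adjacent color space
# signals, with Lagrange-form coefficients (assumed form).

def interp_quadratic(d1, d2, d3, p1, p2, p3, p):
    """Interpolate at position p from samples d1..d3 at positions p1..p3."""
    c1 = (p - p2) * (p - p3) / ((p1 - p2) * (p1 - p3))
    c2 = (p - p1) * (p - p3) / ((p2 - p1) * (p2 - p3))
    c3 = (p - p1) * (p - p2) / ((p3 - p1) * (p3 - p2))
    return c1 * d1 + c2 * d2 + c3 * d3

# At a sample position the interpolant reproduces that sample exactly:
print(interp_quadratic(10.0, 20.0, 40.0, 0.0, 1.0, 2.0, 1.0))
```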
  • FIG. 11 illustrates a flow diagram of a method of using the video systems shown in FIGS. 4-7 to process a video signal.
  • the video signal 80 is transmitted to at least one receiver.
  • the video signal is received by a receiver.
  • the receiver incorporates a video system which embodies the present invention.
  • the receiver could be a consumer TV, projection TV, computer monitor, liquid crystal display (LCD) TV, LCD computer monitor, or any other means for receiving and displaying a visual image represented by an electronic signal.
  • the sync trigger signal 86 and sampling signal 82 are generated from the video signal 80.
  • the sync trigger signal 86 is then distributed to the memory 72 and the enhanced-video circuit 74,100 to coordinate the transfer of the color space signals from the memory 72 to the enhanced-video circuit 74, 100.
  • the sampling signal 82 is distributed to the converter 70 and the memory 72 to synchronize their operations.
  • the video signal 80 is converted to the plurality of color space signals representing an input frame.
  • the color space signals representing the input frame are stored in the memory 72.
  • non-uniform interpolation between adjacent color space signals is performed to generate a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame.
  • the non-uniform interpolation can be based on either linear or non-linear interpolation.
  • an image represented by the interpolated pixel signals is displayed by the monitor 76.
  • FIG. 12 is a detailed block diagram of the converter 70 shown in FIGS. 4-7.
  • the converter 70 includes an A/D converter 190, a signal converter 192, and a decoder 194.
  • the A/D converter 190 digitizes the video signal 80 into a corresponding plurality of binary-coded signals.
  • the signal converter 192, which is responsive to the binary-coded signals, generates a plurality of chrominance signals and a plurality of luminance signals.
  • upon receiving the chrominance and luminance signals, the decoder 194 generates the corresponding color space signals.
  • the operations of the A/D converter 190, the signal converter 192, and the decoder 194 are synchronized by the sampling signal 82.
  • FIG. 13 is a detailed block diagram of one version of the enhanced-video circuits 74, 100 shown in FIGS. 4-7.
  • This version of the enhanced-video circuit can be used to compute non-uniform interpolations based on the function given in Equation 1.
  • the enhanced-video circuit comprises a plurality of interpolation circuits 208a-c, a memory 202, a control unit 200, a line buffer 204, a delay buffer 206, and an output buffer 218.
  • although the enhanced-video circuit may include any number of interpolation circuits, it typically includes one interpolation circuit per component color. For example, only one interpolation circuit would be needed to perform non-uniform interpolation on a monochromatic video signal.
  • the exemplary enhanced-video circuit shown in FIG. 13 is intended to process a video signal having up to three color components, such as an RGB signal; thus, the circuit includes three interpolation circuits 208a-c.
  • the enhanced-video circuit is capable of performing non-uniform interpolation with any format of component video signals
  • the following discussion referring to FIGS. 13 and 14 will use, as an example, RGB signals to illustrate the functions of the various versions of the enhanced-video circuit.
  • the interpolation circuits 208a-c generate a plurality of interpolated pixel signals in response to a plurality of color space signals received on a data input bus.
  • the data input bus includes a red bus 224, a green bus 226, and a blue bus 228.
  • the red interpolation circuit 208a receives color space signals representing the red component of an RGB signal over the red bus 224; the green interpolation circuit 208b receives color space signals representing the green component of an RGB signal over the green bus 226; and the blue interpolation circuit 208c receives color space signals representing the blue component of an RGB signal over the blue bus 228.
  • Each interpolation circuit performs non-uniform interpolation between adjacent color space signals and includes at least one arithmetic circuit 210a-i for computing the non-uniform interpolation.
  • although an interpolation circuit may comprise any number of arithmetic circuits, in the given example each interpolation circuit includes three arithmetic circuits.
  • Each arithmetic circuit includes a first multiplier, a second multiplier, and an adder for producing an interpolated pixel signal.
  • the red interpolation circuit 208a includes three red arithmetic circuits 210a-c; the green interpolation circuit 208b includes three green arithmetic circuits 210d-f; and the blue interpolation circuit 208c includes three blue arithmetic circuits 210g-i.
  • the function of the arithmetic circuits 210a-i can be illustrated by referring to the first red arithmetic circuit 210a.
  • the first red arithmetic circuit 210a includes a first multiplier 212, a second multiplier 214, and an adder 216.
  • the first multiplier 212 multiplies a red component signal received on the red bus 224 with a coefficient to produce a first product signal.
  • the second multiplier 214 multiplies a stored red component with a coefficient to produce a second product signal.
  • the adder 216 sums the first and second product signals to generate an interpolated red pixel signal.
  • the coefficients typically have different values; however, under some circumstances, such as generating output scan lines that are equidistant from the input scan lines, they may have the same value.
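The data path through the first red arithmetic circuit 210a (first multiplier 212, second multiplier 214, adder 216) can be sketched as scalar arithmetic, mirroring Equation 1. The function is an illustrative software model, not the patent's hardware.

```python
# Sketch of one FIG. 13 arithmetic circuit: two multipliers and an adder.

def arithmetic_circuit(current_component, stored_component, c1, c2):
    first_product = c1 * current_component   # first multiplier 212
    second_product = c2 * stored_component   # second multiplier 214
    return first_product + second_product    # adder 216

# Equal coefficients: output line equidistant from the two input lines.
print(arithmetic_circuit(100.0, 60.0, 0.5, 0.5))
```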
  • the memory 202 provides a means for storing coefficients and provides at least one coefficient to the interpolation circuits 208a-c.
  • coefficients used in non-uniform interpolation of color space signals in the red component are passed across a red memory bus 238; coefficients used in non- uniform interpolation of color space signals in the green component are passed across a green memory bus 240; and coefficients used in non-uniform interpolation of color space signals in the blue component are passed across a blue memory bus 242.
  • the control unit 200 generates an address 236 usable by the memory 202 to retrieve at least one coefficient.
  • the control unit 200 generates the address 236 in response to receiving a scan line address 222 corresponding to the adjacent color space signals being interpolated.
  • the control unit 200 is programmable to vary the number of scan lines represented by the interpolated pixel signals. This is accomplished by the control unit 200 receiving an instruction 220 and then decoding the instruction to select a different address offset value which is included in the address 236.
  • the address offset essentially points to a different memory space containing another set of coefficients.
  • the memory 202 stores sets of coefficients to generate output frames having 700, 800, 900, 1000, 1200, or 1920 scan lines.
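The control unit's address generation can be sketched as follows. The supported line counts come from the text; the coefficient-set size, the instruction encoding as a line count, and the memory layout are illustrative assumptions.

```python
# Sketch of control unit 200 selecting a coefficient set in memory 202:
# a decoded address offset picks the set for the requested output line
# count, and the scan line address indexes within that set.

LINES_PER_FRAME = [700, 800, 900, 1000, 1200, 1920]
SET_SIZE = 4096  # assumed size of one coefficient set

def coefficient_address(instruction, scan_line_address):
    """Combine the decoded offset with the scan line address."""
    offset = LINES_PER_FRAME.index(instruction) * SET_SIZE
    return offset + scan_line_address

print(coefficient_address(900, 17))
```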
  • the control unit 200 can also generate control signals which are passed to the line buffer 204, delay buffer 206, and output buffer 218. Such control signals can be used to coordinate the transfer of data, or they can also be used to initialize or reset the buffers. Additionally, the control unit 200 generates an output sync signal 234 which is used for transferring data across a first output bus 230 or a second output bus 232.
  • the line buffer 204 and the delay buffer 206 constitute a buffer for storing color space signals corresponding to a scan line.
  • the delay buffer 206 receives a sequence of color space signals representing a scan line.
  • the delay buffer transfers its contents to the line buffer 204.
  • the delay buffer 206 then begins storing the color space signals of the next scan line while the line buffer 204 holds the color space signals of the previously completed scan line.
  • the color space signals stored in the line buffer 204 are distributed to the interpolation circuits 208a-c across their respective buffer bus.
  • a red buffer bus 244 connects the line buffer 204 to the red interpolation circuit 208a.
  • a green buffer bus 246 connects the line buffer 204 to the green interpolation circuit 208b.
  • a blue buffer bus 248 connects the line buffer 204 to the blue interpolation circuit 208c.
  • the line buffer 204 and the delay buffer 206 act as a double-buffer that stores color space signals of adjacent scan lines.
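The double-buffer behaviour of the delay buffer 206 and line buffer 204 can be sketched as below; the class and method names are illustrative, not the patent's.

```python
# Sketch of the FIG. 13 double-buffer: when a scan line completes, the
# delay buffer's contents move to the line buffer, so the pair always
# holds two adjacent scan lines.

class DoubleBuffer:
    def __init__(self):
        self.line_buffer = []    # previously completed scan line
        self.delay_buffer = []   # scan line currently arriving

    def receive(self, sample):
        self.delay_buffer.append(sample)

    def end_of_line(self):
        self.line_buffer = self.delay_buffer  # transfer completed line
        self.delay_buffer = []

buf = DoubleBuffer()
for s in [1, 2, 3]:
    buf.receive(s)
buf.end_of_line()
buf.receive(4)
print(buf.line_buffer, buf.delay_buffer)
```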
  • the output buffer 218 receives interpolated pixel signals from the interpolation circuits 208a-c and transmits interpolated pixel signals of a current output scan line on the output buses 230-232. Interpolated pixels that are not part of the current output scan line are temporarily stored in the output buffer 218. Each of the output buses 230-232 can concurrently transmit the red, green, and blue interpolated pixel signals of the RGB signal. Two output buses are provided to increase the bandwidth of the output. Generally, the output scan lines are transmitted at a higher frequency than the input scan lines.
  • the output buffer 218 may optionally include a means (not shown) for interpolating between adjacent pixels within a scan line to produce a greater number of pixels in the output scan line.
  • Interpolation performed in the output buffer 218 may be either linear or non-linear non-uniform interpolation.
  • the interpolation may be based on either Equation 1 or 4.
  • the means for interpolating generates horizontally interpolated pixels by simply averaging two adjacent pixels. By interpolating within scan lines, i.e., performing two-dimensional interpolation, the definition of an image represented by the video signal can be further enhanced.
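The horizontal interpolation described above can be sketched as below, assuming each new pixel is the average of its two neighbours; the function name is illustrative.

```python
# Sketch of horizontal interpolation in output buffer 218: insert the
# average of each adjacent pixel pair, roughly doubling the pixel count.

def horizontal_interpolate(line):
    out = []
    for a, b in zip(line, line[1:]):
        out.append(a)
        out.append((a + b) / 2)  # averaged pixel between neighbours
    out.append(line[-1])
    return out

print(horizontal_interpolate([10, 20, 40]))
```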
  • FIG. 14 is a detailed block diagram of an alternative version of the enhanced-video circuit shown in FIGS. 4-7 in accordance with one embodiment of the present invention.
  • This version of the enhanced-video circuit can be used to compute non-uniform interpolations based on the function given in Equation 4.
  • the enhanced-video circuit comprises a plurality of interpolation circuits 272a-c, a memory 202, a control unit 200, a first line buffer 266, a second line buffer 268, a delay buffer 206, and an output buffer 218.
  • although the enhanced-video circuit may include any number of interpolation circuits, it typically includes one interpolation circuit per component color. For example, only one interpolation circuit would be needed to perform non-uniform interpolation on a monochromatic video signal.
  • the exemplary enhanced-video circuit shown in FIG. 14 is intended to process a video signal having up to three color components, such as an RGB signal; thus, the circuit includes three interpolation circuits 272a-c.
  • the interpolation circuits 272a-c generate a plurality of interpolated pixel signals in response to a plurality of color space signals received on a data input bus. Each of the interpolation circuits 272a-c is capable of concurrently generating up to three interpolated pixel signals.
  • the data input bus includes a red bus 224, a green bus 226, and a blue bus 228.
  • the red interpolation circuit 272a receives color space signals representing the red component of an RGB signal over the red bus 224;
  • the green interpolation circuit 272b receives color space signals representing the green component of an RGB signal over the green bus 226;
  • the blue interpolation circuit 272c receives color space signals representing the blue component of an RGB signal over the blue bus 228.
  • Each interpolation circuit performs a non-linear, non-uniform interpolation between adjacent color space signals and includes at least one arithmetic circuit 274a-i for computing the non-uniform interpolation.
  • although an interpolation circuit may comprise any number of arithmetic circuits, in the given example each interpolation circuit includes three arithmetic circuits.
  • Each arithmetic circuit includes a first multiplier, a second multiplier, a third multiplier, a first adder, and a second adder for producing an interpolated pixel signal.
  • the red interpolation circuit 272a includes three red arithmetic circuits 274a-c; the green interpolation circuit 272b includes three green arithmetic circuits 274d-f; and the blue interpolation circuit 272c includes three blue arithmetic circuits 274g-i.
  • the function of the arithmetic circuits 274a-i can be illustrated by referring to the first red arithmetic circuit 274a.
  • the first red arithmetic circuit 274a includes a first multiplier 286, a second multiplier 288, a third multiplier 290, a first adder 284, and a second adder 282.
  • the first multiplier 286 multiplies a red component signal with a coefficient to produce a first product signal.
  • the second multiplier 288 multiplies a first stored red component with a coefficient to produce a second product signal.
  • the first adder 284 sums the first product signal and the second product signal to generate a first sum signal.
  • the third multiplier 290 multiplies a second stored red component signal with a coefficient to generate a third product signal.
  • the second adder 282 sums the first sum signal and third product signal to produce an interpolated red pixel signal.
  • the coefficients typically have different values; however, under some circumstances, such as generating output scan lines that are equidistant from the input scan lines, they may have the same value.
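The data path through the first red arithmetic circuit 274a (multipliers 286, 288, 290 and adders 284, 282) can be sketched as scalar arithmetic, mirroring the second-order interpolation of Equation 4. Again, this is an illustrative software model, not the hardware.

```python
# Sketch of one FIG. 14 arithmetic circuit: three multipliers and two
# adders combining three adjacent scan-line components.

def arithmetic_circuit_2nd_order(d1, d2, d3, c1, c2, c3):
    first_product = c1 * d1                      # first multiplier 286
    second_product = c2 * d2                     # second multiplier 288
    first_sum = first_product + second_product   # first adder 284
    third_product = c3 * d3                      # third multiplier 290
    return first_sum + third_product             # second adder 282

print(arithmetic_circuit_2nd_order(10.0, 20.0, 30.0, 0.25, 0.5, 0.25))
```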
  • the memory 202 provides a means for storing coefficients and provides at least one coefficient to the interpolation circuits 272a-c.
  • coefficients used in non-linear, non-uniform interpolation of color space signals in the red component are passed across a red memory bus 276, while coefficients for color space signals in the green component are passed across a green memory bus 278, and coefficients for color space signals in the blue component are passed across a blue memory bus 280.
  • the control unit 200 generates an address 236 usable by the memory 202 to retrieve at least one coefficient.
  • the control unit 200 generates the address 236 in response to receiving a scan line address 222 corresponding to the adjacent color space signals being interpolated.
  • the control unit 200 is programmable to vary the number of scan lines represented by the interpolated pixel signals. This is accomplished by the control unit 200 receiving an instruction 220 and then decoding the instruction to select a different address offset value which is included in the address 236.
  • the address offset essentially points to a different memory space containing another set of coefficients.
  • the memory 202 stores sets of coefficients to generate output frames having 700, 800, 900, 1000, 1200, or 1920 lines per frame.
  • the control unit 200 can also generate control signals which are passed to the first line buffer 266, second line buffer 268, delay buffer 206, and output buffer 218. Such control signals can be used to coordinate the transfer of data, or they can also be used to initialize or reset the buffers. Additionally, the control unit 200 generates an output sync signal 234 which is used for transferring data across a first output bus 230 or a second output bus 232.
  • the first line buffer 266, the second line buffer 268, and the delay buffer 206 constitute a buffer for storing color space signals corresponding to three consecutive scan lines.
  • the delay buffer 206 receives a sequence of color space signals representing a scan line.
  • the delay buffer 206 transfers its contents to the second line buffer 268.
  • the delay buffer 206 begins storing color space signals of the next scan line and the second line buffer 268 holds the color space signals of the previously completed scan line.
  • the contents of the second line buffer 268 are shifted to the first line buffer 266 and the contents of the delay buffer are shifted into the second line buffer 268.
  • the first and second line buffers contain the color space signals of two adjacent scan lines.
  • the color space signals stored in the first line buffer 266 and the second line buffer 268 are distributed to the interpolation circuits 272a-c across their respective buffer bus.
  • a first red buffer bus 281 connects the first line buffer 266 to the red interpolation circuit 272a
  • a second red buffer bus 287 connects the second line buffer 268 to the red interpolation circuit 272a.
  • a first green buffer bus 283 connects the first line buffer 266 to the green interpolation circuit 272b
  • a second green buffer bus 289 connects the second line buffer 268 to the green interpolation circuit 272b.
  • a first blue buffer bus 285 connects the first line buffer 266 to the blue interpolation circuit 272c, whereas a second blue buffer bus 291 connects the second line buffer 268 to the blue interpolation circuit 272c.
  • the first line buffer 266, the second line buffer 268, and the delay buffer 206 act as a triple-buffer that stores color space signals of three adjacent scan lines.
  • the output buffer 218 receives interpolated pixel signals from the interpolation circuits 272a-c and transmits interpolated pixel signals of a current output scan line on the output buses 230-232. Interpolated pixels that are not part of the current output scan line are temporarily stored in the output buffer 218. Each of the output buses 230-232 can concurrently transmit the red, green, and blue interpolated pixel signals of the RGB signal. Two output buses are provided to increase the bandwidth of the output. Generally, the output scan lines are transmitted at a higher frequency than the input scan lines.
  • the output buffer 218 may optionally include a means (not shown) for interpolating between adjacent pixels within a scan line to produce a greater number of pixels in the output scan line.
  • Interpolation performed in the output buffer 218 may be either linear or non-linear non-uniform interpolation.
  • the means for interpolating generates horizontally interpolated pixels by simply averaging two adjacent pixels. By interpolating within scan lines, i.e., performing two-dimensional interpolation, the definition of an image represented by the video signal can be further enhanced.
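The within-line averaging just described can be sketched as follows. The function name and the choice to insert each average between its two source pixels are illustrative assumptions; the patent only says adjacent pixels are averaged:

```python
def widen_scan_line(pixels):
    """Increase the pixel count of one scan line by inserting the
    average of each adjacent pair of pixels between them."""
    out = []
    for a, b in zip(pixels, pixels[1:]):
        out.append(a)
        out.append((a + b) / 2.0)  # horizontally interpolated pixel
    out.append(pixels[-1])         # keep the final original pixel
    return out

print(widen_scan_line([10, 20, 40]))  # -> [10, 15.0, 20, 30.0, 40]
```

Combined with the vertical interpolation of the interpolation circuits, this yields the two-dimensional interpolation mentioned above.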
  • FIG. 15 is a flow diagram of a method of using the enhanced-video circuits shown in FIGS. 13-14 to generate a plurality of interpolated pixel signals.
  • box 300 a plurality of adjacent color space signals is received on the data input bus.
  • At least one coefficient corresponding to the adjacent color space signals is selected from the memory 202. This is accomplished when the scan line address 222 corresponding to the adjacent color space signals is received and decoded by the control unit 200 to generate the address 236. The coefficients stored at the address 236 are then retrieved from the memory 202. By decoding the instruction 220, the control unit 200 can produce an address offset which is used to select a different set of coefficients. By selecting a different set of coefficients, the control unit 200 can, in effect, select a different number of scan lines represented by the interpolated pixel signals.
  • non-uniform interpolation is performed between the adjacent color space signals using the retrieved coefficients to generate the plurality of interpolated pixel signals.
  • the enhanced-video circuit shown in FIG. 13 performs linear interpolation which is based on the function given in Equation 1, while the enhanced-video circuit shown in FIG. 14 performs 2nd- order non-linear interpolation which is based on the function given in Equation 4.
  • FIG. 16 illustrates a flow diagram of a method of processing an interlaced video signal to generate a high-resolution video signal.
  • the video systems depicted in FIGS. 4-7 can be employed to perform this method. The method results in generating a high- resolution video signal having a greater number of scan lines than the interlaced video signal.
  • an interlaced video signal having two consecutive fields is received.
  • the interlaced video signal can be formatted according to conventional television transmission standards such as PAL, NTSC, or SECAM. In such a signal, one of the two consecutive fields has even scan lines and the other field has odd scan lines. In most circumstances, the interlaced video signal is a continuous signal which includes a sequence of more than two fields.
  • the interlaced video signal is digitized to produce a digital video signal having a plurality of digitized fields corresponding to the two consecutive fields.
  • There are many applications of this method in which it is desirable to produce two or more digitized fields from a single interlaced field, for instance, when separately manipulating portions of an image represented by the interlaced field.
  • the digitized fields are merged to produce a frame which includes the even scan lines and the odd scan lines.
  • Merging fields typically entails storing a first received field and then combining it with a subsequently received field.
  • the method presented herein is not limited to a particular process for merging fields.
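One straightforward merging scheme, sketched here under the assumption that the even field carries scan lines 0, 2, 4, … and the odd field carries lines 1, 3, 5, … (the patent deliberately leaves the exact merging process open):

```python
def merge_fields(even_field, odd_field):
    """Interleave an even field and an odd field into one frame.
    Each field is a list of scan lines; each scan line is a list
    of pixel values."""
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)  # even-numbered scan line
        frame.append(odd_line)   # odd-numbered scan line
    return frame

# Two 2-line fields merge into one 4-line frame.
frame = merge_fields([[1, 1], [3, 3]], [[2, 2], [4, 4]])
print(frame)  # -> [[1, 1], [2, 2], [3, 3], [4, 4]]
```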
  • non-uniform interpolation is performed between adjacent scan lines in the frame to generate the high-resolution video signal.
  • the high- resolution video signal has a greater number of vertical scan lines than the interlaced video signal.
  • an image represented by the high- resolution video signal is displayed on a monitor, such as the monitor 76 depicted in FIGS. 4-7.
  • the steps in boxes 360-368 can be repeated to generate a plurality of frames, and thus a corresponding plurality of images.
  • a motion picture, represented by the high-resolution video signal, can be rendered by letting each of the frames correspond to a different consecutive pair of the fields.
  • FIG. 17 illustrates a flow diagram of a method of processing an interlaced color video signal to generate a high-resolution video signal in accordance with one embodiment of the present invention.
  • the method of FIG. 17 also includes the step given in box 374.
  • the digital video signal is decoded into a plurality of color component signals.
  • the color component signals represent the components of a color space. For example, in the RGB color space, one of the color component signals represents the red space, another represents the green space, and a third represents the blue space.
  • the steps in boxes 370-380 can be repeated to generate a plurality of frames, and thus a corresponding plurality of images.
  • a color motion picture, represented by the high-resolution video signal, can be rendered by letting each of the frames correspond to a different consecutive pair of the fields.
  • FIG. 18 illustrates a contextual diagram of a broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
  • the broadcasting system includes a broadcasting station 420 and a receiver 424.
  • the broadcasting station 420 includes a transmitter 421 that emits a video signal 422 which travels via the atmosphere to the receiver 424.
  • the transmitter 421 can include a ground based antenna, microwave relay, or satellite.
  • the video signal can include broadcast information formatted according to conventional television transmission standards, such as NTSC, PAL, SECAM, or any variation of these standards.
  • the receiver 424 includes an embodiment of the present invention and may be construed as any means for receiving the video signal 422 and displaying a transmitted image.
  • the receiver 424 could include a color television receiver, a projection screen TV, or a computer.
  • FIG. 19 illustrates a contextual diagram of a cable broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
  • the cable broadcasting system includes a video source 430, a transmission medium 432, such as a coaxial cable, and a receiver 434.
  • the video source 430 includes a transmitter that emits a video signal which travels via the transmission medium 432 to the receiver 434.
  • the video source 430 can include a video cassette player, video camera that plays back images, or a CD ROM.
  • the video signal can include broadcast information formatted according to conventional television transmission standards, such as NTSC, PAL, SECAM, or any variation of these standards.
  • the receiver 434 may be any means, which includes an embodiment of the present invention, for receiving the video signal and displaying a transmitted image.
  • the receiver 434 could include a television, a projection screen TV, or a computer.
  • Because the various embodiments of the video system, and the method of using same, as herein described, use non-uniform interpolation to increase the number of scan lines in a video signal in real-time, they produce a video image of vastly improved quality.
  • Because the various embodiments of the video system include a converter that accepts real-time video signals formatted according to conventional NTSC standards, they are capable of generating improved NTSC video images that can be displayed on high-resolution computer monitors.
  • Because the various embodiments of the video system include an enhanced-video circuit for performing non-uniform interpolation which is inexpensive and practical to implement using an integrated circuit, they can be incorporated into consumer television receivers.

Abstract

A real-time video system which performs non-uniform interpolation between adjacent vertical scan lines is presented. The video system includes a converter (70), a memory (72), an enhanced-video circuit (74), and a sync generator (78). The video system decodes and digitizes an analog composite video signal, such as an NTSC, PAL, or SECAM signal, and generates a digital video signal having a greater number of vertical scan lines than the analog video signal. The video system is programmable to allow a different number of scan lines in the output digital video signal.

Description

NON-LINEAR INTERPOLATION FOR COLOR VIDEO LINE CONVERSION
Related Inventions
The present invention is related to the following inventions which are assigned to the same assignee as the present invention:
(1) "Method of Generating High-Resolution Video", having Serial No. 08/496,793, filed concurrently herewith; and
(2) "Circuit for Interpolating Scan Lines of a Video Signal and Method of Using Same", having Serial No. 08/496,795, filed concurrently herewith.
The subject matter of the above-identified related inventions is hereby incorporated by reference into the disclosure of this invention.
Technical Field
The present invention relates generally to video systems and, in particular, to a video system which converts analog video signals into digital video signals.
Background of the Invention
The first color television system was developed in the United States, and in December 1953 the Federal Communications Commission (FCC) approved the transmission standard. Most of the work for developing a color transmission standard was done by the National Television System Committee (NTSC). The NTSC standard provides a format for broadcasting a video signal having 525 scan lines (485 representing pixels), 60 fields/second, and 2:1 interlacing. Television broadcasts in the United States, Japan, and many other countries currently adhere to the NTSC standard.
The European equivalent of the NTSC standard is the Phase Alternation Line (PAL) standard, which calls for 625 scan lines, 50 fields/second, and 2:1 interlacing. Another European standard, SECAM, was developed in France during the 1960s. Like PAL, SECAM is a 625 scan line, 50 field/second, 2:1 interlaced system. The three standards, NTSC, PAL, and SECAM, provide analog video sources having an interlaced display format, i.e., each frame of video is scanned out as two fields that are separated temporally and offset spatially in the vertical direction.
FIG. 1 illustrates the temporal separation of the fields in an interlaced video signal. The video signal consists of a sequence of alternating even and odd fields separated by a period of time for synchronizing the fields. The even field contains every other scan line in the frame, or the even-numbered scan lines, while the odd field contains the odd-numbered scan lines. Thus, an NTSC field contains 262.5 scan lines. FIG. 2 shows an example of a raster displaying an odd field.
FIG. 3 shows an example of a waveform of an NTSC composite video signal. The waveform shown represents two scan lines. The waveform includes a horizontal sync pulse 50 and a color burst 52 for each scan line.
There has been much discussion concerning the introduction of new television standards, such as High-Definition Television (HDTV), to improve the quality of transmitted images and audio. New transmission standards will require a substantial investment in new equipment by both broadcasters and consumers. A television receiver which performs as well as an HDTV set under the present broadcasting standards would effectively achieve the same goal as the proposed standards without requiring consumers and broadcasters to pay an enormous conversion cost.
Thus, there is a need for a video system and method that generates high definition images from video signals broadcast using standard television transmission formats, such as NTSC, PAL, or SECAM.
Brief Description of the Drawings
The invention is pointed out with particularity in the appended claims. However, other features of the invention will become more apparent and the invention will be best understood by referring to the following detailed description in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a temporal block diagram of an interlaced video signal.
FIG. 2 illustrates a graphical representation of a raster represented by the interlaced video signal of FIG. 1.
FIG. 3 illustrates a graphical representation of a waveform segment of an NTSC video signal.
FIG. 4 illustrates a block diagram representation of a video system in accordance with one embodiment of the present invention.
FIG. 5 illustrates a block diagram representation of a video system in accordance with a preferred embodiment of the present invention.
FIG. 6 illustrates a block diagram representation of a video system in accordance with another embodiment of the present invention.
FIG. 7 illustrates a block diagram representation of a video system in accordance with a further embodiment of the present invention.
FIG. 8 conceptually illustrates non-uniform interpolation performed in accordance with an embodiment of the present invention.
FIG. 9 illustrates a graphical representation of linear, non-uniform interpolation performed in accordance with an embodiment of the present invention.
FIG. 10 illustrates a graphical representation of non-linear, non-uniform interpolation performed in accordance with an embodiment of the present invention.
FIG. 11 illustrates a flow diagram of a method of using the video systems shown in FIGS. 4-7.
FIG. 12 is a detailed block diagram of the converter shown in FIGS. 4-7.
FIG. 13 is a detailed block diagram of the enhanced-video circuit shown in FIGS. 4-7.
FIG. 14 is a detailed block diagram of an alternative version of the enhanced-video circuit shown in FIGS. 4-7 in accordance with one embodiment of the present invention.
FIG. 15 is a flow diagram of a method of using the enhanced-video circuits shown in FIGS. 13-14.
FIG. 16 illustrates a flow diagram of a method of processing an interlaced video signal to generate a high-resolution video signal.
FIG. 17 illustrates a flow diagram of a method of processing an interlaced color video signal to generate a high-resolution video signal in accordance with one embodiment of the present invention.
FIG. 18 illustrates a contextual diagram of a broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
FIG. 19 illustrates a contextual diagram of a cable broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7.
Detailed Description of a Preferred Embodiment
It is an advantage of the present invention to provide a video system which increases the number of scan lines in a video signal in real-time, resulting in a video image of vastly improved quality. It is also an advantage of the present invention to provide a video system that accepts real-time video signals formatted according to conventional NTSC standards and in turn generates improved video images that can be displayed on a high-resolution computer monitor. Another advantage of the present invention is that it provides a video system that can be easily incorporated into consumer television receivers, such as large-screen projection TVs. A further advantage of the present invention is that it provides a method for processing a video signal to produce a corresponding output video signal having a greater number of scan lines.
Referring now to FIG. 4, a block diagram of a video system in accordance with one embodiment of the present invention is shown. The video system includes a converter 70, a memory 72, an enhanced-video circuit 74, a monitor 76, and a sync generator 78. The converter 70 provides a means for converting a video signal 80 to a plurality of color space signals. The format of the video signal 80 can be based on conventional standards for television transmission, such as NTSC, PAL, or SECAM. The color space signals can be construed as binary words that represent values in a given color space, such as a YIQ, YUV, or RGB color space. The color space signals are passed from the converter 70 to the memory 72. The memory 72 stores the plurality of color space signals corresponding to an input frame, and, in turn, provides the color space signals as output. The enhanced-video circuit 74 receives color space signals from the memory 72 and performs non-uniform interpolation between adjacent color space signals.
As a result of performing the non-uniform interpolation, the enhanced-video circuit 74 generates a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame. The interpolated pixel signals can be construed as binary words representing values in the same color space as the color space signals.
The sync generator 78 generates a sync trigger signal 86, a sampling signal 82, and a field sync signal 92. All of these signals are generated from the video signal 80. The sync trigger signal 86 is distributed to the memory 72 and the enhanced-video circuit 74 for coordinating the transfer of the color space signals from the memory 72 to the enhanced-video circuit 74. The sampling signal 82 synchronizes the operations of the converter 70 and the memory 72. If the video signal 80 is an NTSC signal, the frequencies of the sampling signal 82, the sync trigger signal 86, and the field sync signal 92 are approximately 12.27 MHz, 24.54 MHz, and 60 Hz, respectively.
The monitor 76 displays an image represented by the interpolated pixel signals that it receives from the enhanced-video circuit 74. Generally, the monitor 76 is any means for receiving and displaying a visual image represented by an electronic signal. For instance, the monitor 76 could include a consumer TV, a projection TV, a computer monitor, or a liquid crystal display (LCD).
FIG. 5 illustrates a block diagram of a video system in accordance with a preferred embodiment of the present invention. The video system includes the converter 70, memory 72, sync generator 78, and monitor 76 shown in FIG. 4. In addition, the preferred video system includes a PLL 104 (phase-locked loop) and an enhanced-video circuit 100 which allow the video system to vary the number of vertical scan lines in the output frame. The PLL 104 generates at least one high-band sync signal from the field sync signal 92. The PLL 104 can be either an analog or digital PLL. The PLL 104 provides the high-band sync signal to the enhanced-video circuit 100. The high-band sync signal is used to transfer interpolated pixel signals from the enhanced-video circuit 100.
FIG. 6 illustrates a block diagram of a video system in accordance with another embodiment of the present invention. Like the video system in FIG. 4, the video system of FIG. 6 includes the converter 70, the memory 72, the enhanced-video circuit 74, the sync generator 78, and the monitor 76. However, in addition to these elements, the video system of FIG. 6 includes a color space converter 124 for converting the interpolated pixel signals from the enhanced-video circuit 74 into a plurality of output format signals. The output format signals are passed to the monitor 76, which in response displays an image represented by the signals. Examples of possible output format signals are RGB signals and YCrCb signals. The color space converter 124 is useful when the color spaces of the color space signals and the monitor 76 are different.
For example, the converter 70 may generate as output a plurality of YUV signals, whereas the monitor 76 responds to RGB signals. In this circumstance, the color space converter 124 would convert the YUV signals to corresponding RGB signals.
FIG. 7 illustrates a block diagram of a video system in accordance with a further embodiment of the present invention. This version of the video system includes the converter 70, memory 72, sync generator 78, monitor 76, enhanced-video circuit 100, and PLL 104 as shown in FIG. 5. In addition, the video system includes the color space converter 124 for converting the interpolated pixel signals from the enhanced-video circuit 100 into a plurality of output format signals. The output format signals are passed to the monitor 76 which in response displays an image represented by the signals. Examples of possible output format signals are RGB signals and YCrCb signals. The enhanced video circuit 100 and the PLL 104 allow the video system to vary the number of vertical scan lines in the output frame. The PLL 104 generates at least one high-band sync signal from the field sync signal 92. The high-band sync signal is phase-locked to the field sync signal and has a frequency which is a multiple of the field sync signal. The PLL 104 provides the high-band sync signal to the enhanced-video circuit 100 and the color space converter 124. FIG. 8 conceptually illustrates non-uniform interpolation performed in accordance with an embodiment of the present invention. The video signal 80 received by the video system comprises a plurality of scan lines, four of which are shown in FIG. 8. Each scan line includes a plurality of color space signals. The input scan lines are indexed, from k to k+1, according to their relative vertical positions in a frame. The video system processes the input video signal to generate a corresponding plurality of output scan lines. Each output scan line includes a plurality of interpolated pixel signals. In the example shown, the color space signals in each pair of adjacent scan lines are interpolated to produce three output scan lines of interpolated pixel signals. 
For instance, input scan lines k and k+1 constitute an adjacent pair of scan lines, and thus contain a plurality of adjacent color space signals. The three upper-most output scan lines are generated from input scan lines k and k+1 using non-uniform interpolation. The output scan lines are depicted as being equally spaced; however, non-uniform interpolation can also be used to generate output scan lines having irregular spacing. Furthermore, an adjacent pair of input scan lines can be non-uniformly interpolated to generate any number of corresponding output scan lines. For example, an NTSC signal, which has approximately 485 scan lines per frame, can be non-uniformly interpolated to generate output frames having 700, 800, 900, 1000, 1200, or 1920 scan lines.
FIG. 9 illustrates a graphical representation of linear, non-uniform interpolation performed in accordance with an embodiment of the present invention. It will be apparent to one of ordinary skill in the art that linear interpolation is a special case of non-linear interpolation. Linear, non-uniform interpolation is based on the function:
yi = c1ik*dk + c2ik*dk+1    Equation 1
where yi represents an interpolated pixel signal; dk and dk+1 represent a pair of adjacent color space signals; c1ik represents a first coefficient; c2ik represents a second coefficient; and i and k are integer indices corresponding to the output scan lines and the input scan lines, respectively.
The coefficients can be construed as being weight values where c1ik + c2ik = 1, 0 < c1ik < 1, and 0 < c2ik < 1. In FIG. 9, the interpolated pixel signal yi corresponds to adjacent color space signals d1 and d2 located in input scan lines k and k+1, respectively. The variables x1, x2, and x3 represent distances. The coefficients are determined as follows:
c1ik = x1/x3    Equation 2
c2ik = x2/x3    Equation 3
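A software sketch of the linear, non-uniform interpolation of Equation 1. Parameterizing each output line by a fractional position t between the two input lines is an assumption of this sketch; it keeps the two weights summing to one, as the patent requires:

```python
def linear_interp_line(d_k, d_k1, t):
    """Equation 1: y_i = c1*d_k + c2*d_k1, applied pixel-by-pixel
    to a pair of adjacent scan lines.  t in [0, 1] is the fractional
    position of output line i between input lines k and k+1, so the
    weights c1 = 1 - t and c2 = t sum to 1."""
    c1, c2 = 1.0 - t, t
    return [c1 * a + c2 * b for a, b in zip(d_k, d_k1)]

# Three equally spaced output lines between two input lines
# (the positions 0.25, 0.5, 0.75 are illustrative).
upper = [100, 100]
lower = [200, 200]
for t in (0.25, 0.5, 0.75):
    print(linear_interp_line(upper, lower, t))
```

In the hardware described later, the weights are not computed on the fly but read as precomputed coefficients from the memory 202.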
FIG. 10 illustrates a graphical representation of non-linear, non-uniform interpolation performed in accordance with an embodiment of the present invention. FIG. 10 depicts 2nd-order non-linear interpolation based on a function:
yi = c1ik*dk + c2ik*dk+1 + c3ik*dk+2    Equation 4
where yi represents an interpolated pixel signal; dk, dk+1, and dk+2 represent three successive adjacent color space signals; c1ik represents a first coefficient; c2ik represents a second coefficient; c3ik represents a third coefficient; and i and k are integer indices corresponding to the output scan lines and the input scan lines, respectively. The coefficients can be construed as being weight values. Although FIG. 10 represents 2nd-order non-linear interpolation, an embodiment of the present invention can use any nth-order non-linear interpolation. In addition, the non-linear interpolation can be based on an nth-order polynomial expansion.
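As a sketch of such 2nd-order interpolation, the three weights of Equation 4 can be written in standard Lagrange form. This formulation is an assumption for illustration, not a transcription of the patent's distance-based Equations 5-7, though it expresses the same idea:

```python
def quadratic_weights(p1, p2, p3, y):
    """Lagrange weights for the quadratic through samples at
    input-line positions p1, p2, p3, evaluated at output-line
    position y.  The three weights always sum to 1."""
    c1 = (y - p2) * (y - p3) / ((p1 - p2) * (p1 - p3))
    c2 = (y - p1) * (y - p3) / ((p2 - p1) * (p2 - p3))
    c3 = (y - p1) * (y - p2) / ((p3 - p1) * (p3 - p2))
    return c1, c2, c3

def quadratic_interp(d1, d2, d3, weights):
    c1, c2, c3 = weights  # Equation 4 applied to one pixel
    return c1 * d1 + c2 * d2 + c3 * d3

# Output line halfway between input lines 0 and 1, using line 2 as well.
w = quadratic_weights(0.0, 1.0, 2.0, 0.5)
print(quadratic_interp(10.0, 20.0, 40.0, w))  # -> 13.75
```

Note that, unlike the linear case, a quadratic weight can be negative, which is what lets the interpolation track curvature in the image.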
In FIG. 10, the interpolated pixel signal corresponds to adjacent color space signals d1, d2, and d3 located in input scan lines k, k+1, and k+2, respectively. Essentially, a quadratic interpolation function is applied to the three adjacent color space signals to obtain the interpolated pixel signal. The variables x1, x2, x3, x4, x5, and x6 represent distances. The coefficients are determined as follows:
c1ik = (x6 * x3)/(x5 * x1)    Equation 5
c2ik = (x6 * x4)/(x2 * x1)    Equation 6
c3ik = (x4 * x3)/(x5 * x2)    Equation 7
FIG. 11 illustrates a flow diagram of a method of using the video systems shown in FIGS. 4-7 to process a video signal. In box 170, the video signal 80 is transmitted to at least one receiver. In box 172, the video signal is received by a receiver. The receiver incorporates a video system which embodies the present invention. For instance, the receiver could be a consumer TV, projection TV, computer monitor, liquid crystal display (LCD) TV, LCD computer monitor, or any other means for receiving and displaying a visual image represented by an electronic signal.
In box 174, the sync trigger signal 86 and sampling signal 82 are generated from the video signal 80. The sync trigger signal 86 is then distributed to the memory 72 and the enhanced-video circuit 74, 100 to coordinate the transfer of the color space signals from the memory 72 to the enhanced-video circuit 74, 100. Also, the sampling signal 82 is distributed to the converter 70 and the memory 72 to synchronize their operations. In box 176, the video signal 80 is converted to the plurality of color space signals representing an input frame. Next, in box 178, the color space signals representing the input frame are stored in the memory 72. Although the memory 72 could be made large enough to store the color space signals of an entire frame, one of ordinary skill in the art will realize that if the video signal 80 is interlaced with two fields, then the memory 72 needs only to store the color space signals corresponding to one of the fields. In box 180, non-uniform interpolation between adjacent color space signals is performed to generate a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame. The non-uniform interpolation can be based on either linear or non-linear interpolation.
In box 182, an image represented by the interpolated pixel signals is displayed by the monitor 76.
FIG. 12 is a detailed block diagram of the converter 70 shown in FIGS. 4-7. The converter 70 includes an A/D converter 190, a signal converter 192, and a decoder 194. The A/D converter 190 digitizes the video signal 80 into a corresponding plurality of binary-coded signals. The signal converter 192, which is responsive to the binary-coded signals, generates a plurality of chrominance signals and a plurality of luminance signals. Upon receiving the chrominance and luminance signals, the decoder 194 generates the corresponding color space signals. The operations of the A/D converter 190, the signal converter 192, and the decoder 194 are synchronized by the sampling signal 82. FIG. 13 is a detailed block diagram of one version of the enhanced-video circuits 74, 100 shown in FIGS. 4-7. This version of the enhanced-video circuit can be used to compute non-uniform interpolations based on the function given in Equation 1. The enhanced-video circuit comprises a plurality of interpolation circuits 208a-c, a memory 202, a control unit 200, a line buffer 204, a delay buffer 206, and an output buffer 218. Although the enhanced-video circuit may include any number of interpolation circuits, it typically includes one interpolation circuit per component color. For example, only one interpolation circuit would be needed to perform non-uniform interpolation on a monochromatic video signal. The exemplary enhanced-video circuit shown in FIG. 13 is intended to process a video signal having up to three color components, such as an RGB signal; thus, the circuit includes three interpolation circuits 208a-c.
Although it will be realized by one skilled in the art that the enhanced-video circuit is capable of performing non-uniform interpolation with any format of component video signals, the following discussion referring to FIGS. 11 and 12 will use, as an example, RGB signals to illustrate the functions of the various versions of the enhanced-video circuit. The interpolation circuits 208a-c generate a plurality of interpolated pixel signals in response to a plurality of color space signals received on a data input bus. The data input bus includes a red bus 224, a green bus 226, and a blue bus 228. In the example shown, the red interpolation circuit 208a receives color space signals representing the red component of an RGB signal over the red bus 224; the green interpolation circuit 208b receives color space signals representing the green component of an RGB signal over the green bus 226; and the blue interpolation circuit 208c receives color space signals representing the blue component of an RGB signal over the blue bus 228.
Each interpolation circuit performs non-uniform interpolation between adjacent color space signals and includes at least one arithmetic circuit 210a-i for computing the non-uniform interpolation. Although an interpolation circuit may comprise any number of arithmetic circuits, in the given example each interpolation circuit includes three arithmetic circuits. Each arithmetic circuit includes a first multiplier, a second multiplier, and an adder for producing an interpolated pixel signal. For instance, the red interpolation circuit 208a includes three red arithmetic circuits 210a-c; the green interpolation circuit 208b includes three green arithmetic circuits 210d-f; and the blue interpolation circuit 208c includes three blue arithmetic circuits 210g-i.
The function of the arithmetic circuits 210a-i can be illustrated by referring to the first red arithmetic circuit 210a. As shown, the first red arithmetic circuit 210a includes a first multiplier 212, a second multiplier 214, and an adder 216. The first multiplier 212 multiplies a red component signal received on the red bus 224 with a coefficient to produce a first product signal. The second multiplier 214 multiplies a stored red component with a coefficient to produce a second product signal. The adder 216 sums the first and second product signals to generate an interpolated red pixel signal. The coefficients typically have different values; however, under some circumstances, such as generating output scan lines that are equidistant from the input scan lines, they may have the same value. The memory 202 provides a means for storing coefficients and provides at least one coefficient to the interpolation circuits 208a-c. In the example shown, coefficients used in non-uniform interpolation of color space signals in the red component are passed across a red memory bus 238; coefficients used in non-uniform interpolation of color space signals in the green component are passed across a green memory bus 240; and coefficients used in non-uniform interpolation of color space signals in the blue component are passed across a blue memory bus 242. The control unit 200 generates an address 236 usable by the memory 202 to retrieve at least one coefficient. The control unit 200 generates the address 236 in response to receiving a scan line address 222 corresponding to the adjacent color space signals being interpolated. The control unit 200 is programmable to vary the number of scan lines represented by the interpolated pixel signals. This is accomplished by the control unit 200 receiving an instruction 220 and then decoding the instruction to select a different address offset value which is included in the address 236.
The address offset essentially points to a different memory space containing another set of coefficients. In one version of the enhanced-video circuit, the memory 202 stores sets of coefficients to generate output frames having 700, 800, 900, 1000, 1200, or 1920 scan lines.
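The arithmetic performed by one such circuit is simple enough to sketch in software. The following Python fragment is an illustrative model only (the function name and coefficient values are invented for this example, not taken from the patent): each output pixel is a weighted sum of the two vertically adjacent input pixels, per Equation 1, with the weights drawn from a coefficient table indexed by output line.

```python
def interp_linear(d_k, d_k1, c1, c2):
    # One arithmetic circuit (e.g. 210a): two multipliers and an
    # adder computing Equation 1: y_i = c1*d_k + c2*d_(k+1).
    return c1 * d_k + c2 * d_k1

# Hypothetical coefficient sets mapping 2 input lines onto 3 output
# lines; each pair sums to 1 and the spacing is non-uniform.
coeffs = [(1.0, 0.0), (0.6, 0.4), (0.2, 0.8)]
line_k = [100, 120]    # adjacent input scan lines, one color component
line_k1 = [140, 160]
out_lines = [[interp_linear(a, b, c1, c2) for a, b in zip(line_k, line_k1)]
             for c1, c2 in coeffs]
```

With these assumed weights, the first output line reproduces the input line, while the second and third fall at unequal distances between the two input lines.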
The control unit 200 can also generate control signals which are passed to the line buffer 204, delay buffer 206, and output buffer 218. Such control signals can be used to coordinate the transfer of data, or they can also be used to initialize or reset the buffers. Additionally, the control unit 200 generates an output sync signal 234 which is used for transferring data across a first output bus 230 or a second output bus 232.
The line buffer 204 and the delay buffer 206 constitute a buffer for storing color space signals corresponding to a scan line. The delay buffer 206 receives a sequence of color space signals representing a scan line. Upon receiving a sequence corresponding to a complete scan line, the delay buffer transfers its contents to the line buffer 204. At this point, the delay buffer 206 begins storing color space signals of the next scan line and the line buffer 204 holds the color space signals of the previously completed scan line. The color space signals stored in the line buffer 204 are distributed to the interpolation circuits 208a-c across their respective buffer bus. A red buffer bus 244 connects the line buffer 204 to the red interpolation circuit 208a. A green buffer bus 246 connects the line buffer 204 to the green interpolation circuit 208b. A blue buffer bus 248 connects the line buffer 204 to the blue interpolation circuit 208c. In essence, the line buffer 204 and the delay buffer 206 act as a double-buffer that stores color space signals of adjacent scan lines.
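The double-buffering behavior can be modeled in a few lines of Python. This is a sketch under the assumption of a fixed line width; the class and method names are invented for illustration:

```python
class DoubleBuffer:
    """Model of the delay buffer 206 / line buffer 204 pair."""
    def __init__(self, line_width):
        self.width = line_width
        self.delay = []    # delay buffer: accumulates the incoming line
        self.line = None   # line buffer: holds the last complete line

    def push_pixel(self, pixel):
        self.delay.append(pixel)
        if len(self.delay) == self.width:
            # A complete scan line has arrived: transfer it to the
            # line buffer and start filling the delay buffer again.
            self.line = self.delay
            self.delay = []

buf = DoubleBuffer(line_width=3)
for p in [1, 2, 3, 4]:
    buf.push_pixel(p)
# buf.line now holds the completed line; buf.delay holds the start of the next
```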
The output buffer 218 receives interpolated pixel signals from the interpolation circuits 208a-c and transmits interpolated pixel signals of a current output scan line on the output buses 230-232. Interpolated pixels that are not part of the current output scan line are temporarily stored in the output buffer 218. Each of the output buses 230-232 can concurrently transmit the red, blue, and green interpolated pixel signals of the RGB signal. Two output buses are provided to increase the bandwidth of the output. Generally, the output scan lines are transmitted at a higher frequency than the input scan lines. The output buffer 218 may optionally include a means (not shown) for interpolating between adjacent pixels within a scan line to produce a greater number of pixels in the output scan line. Interpolation performed in the output buffer 218 may be either linear or non-linear non-uniform interpolation. For example, the interpolation may be based on either Equation 1 or 4. In one embodiment of the present invention, the means for interpolating generates horizontally interpolated pixels by simply averaging two adjacent pixels. By interpolating within scan lines, i.e., performing two-dimensional interpolation, the definition of an image represented by the video signal can be further enhanced.
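The optional horizontal interpolation mentioned above, averaging two adjacent pixels, can be sketched as follows. This is an illustrative assumption of how such averaging might widen a scan line; the function name is hypothetical:

```python
def widen_line(pixels):
    # Insert the average of each adjacent pixel pair, roughly
    # doubling the number of pixels in the output scan line.
    out = []
    for left, right in zip(pixels, pixels[1:]):
        out.append(left)
        out.append((left + right) / 2)
    out.append(pixels[-1])
    return out
```

For example, `widen_line([10, 20, 40])` yields `[10, 15.0, 20, 30.0, 40]` — the original pixels interleaved with their pairwise averages.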
FIG. 14 is a detailed block diagram of an alternative version of the enhanced-video circuit shown in FIGS. 4-7 in accordance with one embodiment of the present invention. This version of the enhanced-video circuit can be used to compute non-uniform interpolations based on the function given in Equation 4. The enhanced-video circuit comprises a plurality of interpolation circuits 272a-c, a memory 202, a control unit 200, a first line buffer 266, a second line buffer 268, a delay buffer 206, and an output buffer 218. Although the enhanced-video circuit may include any number of interpolation circuits, it typically includes one interpolation circuit per component color. For example, only one interpolation circuit would be needed to perform non-uniform interpolation on a monochromatic video signal. The exemplary enhanced-video circuit shown in FIG. 14 is intended to process a video signal having up to three color components, such as an RGB signal; thus, the circuit includes three interpolation circuits 272a-c.
The interpolation circuits 272a-c generate a plurality of interpolated pixel signals in response to a plurality of color space signals received on a data input bus. Each of the interpolation circuits 272a-c is capable of concurrently generating up to three interpolated pixel signals. The data input bus includes a red bus 224, a green bus 226, and a blue bus 228. In the example shown, the red interpolation circuit 272a receives color space signals representing the red component of an RGB signal over the red bus 224; the green interpolation circuit 272b receives color space signals representing the green component of an RGB signal over the green bus 226; and the blue interpolation circuit 272c receives color space signals representing the blue component of an RGB signal over the blue bus 228. Each interpolation circuit performs a non-linear, non-uniform interpolation between adjacent color space signals and includes at least one arithmetic circuit 274a-i for computing the non-uniform interpolation. Although an interpolation circuit may comprise any number of arithmetic circuits, in the given example each interpolation circuit includes three arithmetic circuits. Each arithmetic circuit includes a first multiplier, a second multiplier, a third multiplier, a first adder, and a second adder for producing an interpolated pixel signal. For instance, the red interpolation circuit 272a includes three red arithmetic circuits 274a-c; the green interpolation circuit 272b includes three green arithmetic circuits 274d-f; and the blue interpolation circuit 272c includes three blue arithmetic circuits 274g-i.
The function of the arithmetic circuits 274a-i can be illustrated by referring to the first red arithmetic circuit 274a. As shown, the first red arithmetic circuit 274a includes a first multiplier 286, a second multiplier 288, a third multiplier 290, a first adder 284, and a second adder 282. The first multiplier multiplies a red component signal with a coefficient to produce a first product signal. The second multiplier 288 multiplies a first stored red component with a coefficient to produce a second product signal. The first adder 284 sums the first product signal and the second product signal to generate a first sum signal. The third multiplier 290 multiplies a second stored red component signal with a coefficient to generate a third product signal. The second adder 282 sums the first sum signal and third product signal to produce an interpolated red pixel signal. The coefficients typically have different values; however, under some circumstances, such as generating output scan lines that are equidistant from the input scan lines, they may have the same value.
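Mirroring the hardware, a second-order arithmetic circuit (Equation 4) can be sketched in Python. The example coefficients below are the standard Lagrange weights for evaluating midway between the first two of three equally spaced samples; they are chosen purely for illustration and are not the patent's stored values:

```python
def interp_second_order(d_k, d_k1, d_k2, c1, c2, c3):
    # Structure of arithmetic circuit 274a: the first adder 284 sums
    # the outputs of multipliers 286 and 288; the second adder 282
    # then adds the output of the third multiplier 290.
    first_sum = c1 * d_k + c2 * d_k1    # adder 284
    return first_sum + c3 * d_k2        # adder 282

# Assumed Lagrange weights for x = 0.5 over sample points 0, 1, 2.
y = interp_second_order(100, 140, 180, 0.375, 0.75, -0.125)
```

For the linear ramp 100, 140, 180 the quadratic fit reproduces the linear midpoint, so `y` is 120.0; on non-linear data the three-point form curves toward the trend of the samples, which is the point of the second-order circuit.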
The memory 202 provides a means for storing coefficients and provides at least one coefficient to the interpolation circuits 272a-c. In the example shown, coefficients used in non-linear, non-uniform interpolation of color space signals in the red component are passed across a red memory bus 276, while coefficients for color space signals in the green component are passed across a green memory bus 278, and coefficients for color space signals in the blue component are passed across a blue memory bus 280. The control unit 200 generates an address 236 usable by the memory 202 to retrieve at least one coefficient. The control unit 200 generates the address 236 in response to receiving a scan line address 222 corresponding to the adjacent color space signals being interpolated. The control unit 200 is programmable to vary the number of scan lines represented by the interpolated pixel signals. This is accomplished by the control unit 200 receiving an instruction 220 and then decoding the instruction to select a different address offset value which is included in the address 236. The address offset essentially points to a different memory space containing another set of coefficients. In one version of the enhanced-video circuit, the memory 202 stores sets of coefficients to generate output frames having 700, 800, 900, 1000, 1200, or 1920 lines per frame.
The control unit 200 can also generate control signals which are passed to the first line buffer 266, second line buffer 268, delay buffer 206, and output buffer 218. Such control signals can be used to coordinate the transfer of data, or they can also be used to initialize or reset the buffers. Additionally, the control unit 200 generates an output sync signal 234 which is used for transferring data across a first output bus 230 or a second output bus 232.
The first line buffer 266, the second line buffer 268, and the delay buffer 206 constitute a buffer for storing color space signals corresponding to three consecutive scan lines. The delay buffer 206 receives a sequence of color space signals representing a scan line. Upon receiving a sequence corresponding to a complete scan line, the delay buffer 206 transfers its contents to the second line buffer 268. At this point, the delay buffer 206 begins storing color space signals of the next scan line and the second line buffer 268 holds the color space signals of the previously completed scan line. Upon receiving the next scan line, the contents of the second line buffer 268 are shifted to the first line buffer 266 and the contents of the delay buffer are shifted into the second line buffer 268. At this point, the first and second line buffers contain the color space signals of two adjacent scan lines. The color space signals stored in the first line buffer 266 and the second line buffer 268 are distributed to the interpolation circuits 272a-c across their respective buffer bus. A first red buffer bus 281 connects the first line buffer 266 to the red interpolation circuit 272a, whereas a second red buffer bus 287 connects the second line buffer 268 to the red interpolation circuit 272a. A first green buffer bus 283 connects the first line buffer 266 to the green interpolation circuit 272b, whereas a second green buffer bus 289 connects the second line buffer 268 to the green interpolation circuit 272b. A first blue buffer bus 285 connects the first line buffer 266 to the blue interpolation circuit 272c, whereas a second blue buffer bus 291 connects the second line buffer 268 to the blue interpolation circuit 272c. In essence, the first line buffer 266, the second line buffer 268, and the delay buffer 206 act as a triple-buffer that stores color space signals of three adjacent scan lines.
The output buffer 218 receives interpolated pixel signals from the interpolation circuits 272a-c and transmits interpolated pixel signals of a current output scan line on the output buses 230-232. Interpolated pixels that are not part of the current output scan line are temporarily stored in the output buffer 218. Each of the output buses 230-232 can concurrently transmit the red, blue, and green interpolated pixel signals of the RGB signal. Two output buses are provided to increase the bandwidth of the output. Generally, the output scan lines are transmitted at a higher frequency than the input scan lines. The output buffer 218 may optionally include a means (not shown) for interpolating between adjacent pixels within a scan line to produce a greater number of pixels in the output scan line. Interpolation performed in the output buffer 218 may be either linear or non-linear non-uniform interpolation. In one embodiment of the present invention, the means for interpolating generates horizontally interpolated pixels by simply averaging two adjacent pixels. By interpolating within scan lines, i.e., performing two-dimensional interpolation, the definition of an image represented by the video signal can be further enhanced.
Although the enhanced-video circuit of the present invention is preferably implemented as an integrated circuit, such as an ASIC, it will be understood by one of ordinary skill in the art that the enhanced-video circuit of the present invention may be implemented in either hardware or software, or any combination thereof. FIG. 15 is a flow diagram of a method of using the enhanced-video circuits shown in FIGS. 13-14 to generate a plurality of interpolated pixel signals. In box 300, a plurality of adjacent color space signals is received on the data input bus.
In box 302, at least one coefficient corresponding to the adjacent color space signals is selected from the memory 202. This is accomplished when the scan line address 222 corresponding to the adjacent color space signals is received and decoded by the control unit 200 to generate the address 236. The coefficients stored at the address 236 are then retrieved from the memory 202. By decoding the instruction 220, the control unit 200 can produce an address offset which is used to select a different set of coefficients. By selecting a different set of coefficients, the control unit 200 can, in effect, select a different number of scan lines represented by the interpolated pixel signals.
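One possible model of this address formation: the decoded instruction yields a base offset selecting the coefficient block for the desired output-line count, and the scan line address indexes into that block. All layout constants below are invented for illustration and do not come from the patent:

```python
COEFF_SETS_PER_LINE = 3                       # assumed sets per input line
MODE_BASE = {700: 0, 800: 4096, 1200: 8192}   # assumed block base addresses

def coefficient_address(output_lines, scan_line_address):
    # Instruction 220 selects the base offset; scan line address 222
    # selects the coefficient set within the block, yielding address 236.
    base = MODE_BASE[output_lines]
    return base + scan_line_address * COEFF_SETS_PER_LINE
```

Switching `output_lines` from 700 to 800 moves every lookup into a different region of the memory 202, which is the effect described above of decoding a new instruction.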
In box 304, non-uniform interpolation is performed between the adjacent color space signals using the retrieved coefficients to generate the plurality of interpolated pixel signals. The enhanced-video circuit shown in FIG. 13 performs linear interpolation which is based on the function given in Equation 1, while the enhanced-video circuit shown in FIG. 14 performs second-order non-linear interpolation which is based on the function given in Equation 4.
It will be realized by one of ordinary skill in the art that the concept of an enhanced-video circuit can be extended to include an arithmetic circuit that performs non-linear interpolation having an order higher than two. For instance, the interpolation circuits 272a-c shown in FIG. 14 could include arithmetic circuits that implement a third-order, fourth-order, or fifth-order interpolation. FIG. 16 illustrates a flow diagram of a method of processing an interlaced video signal to generate a high-resolution video signal. The video systems depicted in FIGS. 4-7 can be employed to perform this method. The method results in generating a high- resolution video signal having a greater number of scan lines than the interlaced video signal.
In box 360, an interlaced video signal having two consecutive fields is received. The interlaced video signal can be formatted according to conventional television transmission standards such as PAL, NTSC, or SECAM. In such a signal, one of the two consecutive fields has even scan lines and the other field has odd scan lines. In most circumstances, the interlaced video signal is a continuous signal which includes a sequence of more than two fields.
In box 362, the interlaced video signal is digitized to produce a digital video signal having a plurality of digitized fields corresponding to the two consecutive fields. Generally, there is a one-to-one correspondence between the fields of the interlaced video signal and the digitized fields. However, there are many applications of this method in which it is desirable to produce two or more digitized fields from a single interlaced field, for instance, when separately manipulating portions of an image represented by the interlaced field.
In box 364, the digitized fields are merged to produce a frame which includes the even scan lines and the odd scan lines. Merging fields typically entails storing a first received field and then combining it with a subsequently received field. However, the method presented herein is not limited to a particular process for merging fields. Next, in box 366, non-uniform interpolation is performed between adjacent scan lines in the frame to generate the high-resolution video signal. The high-resolution video signal has a greater number of vertical scan lines than the interlaced video signal. In box 368, an image represented by the high-resolution video signal is displayed on a monitor, such as the monitor 76 depicted in FIGS. 4-7.
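The merge step in box 364 amounts to weaving the two digitized fields together line by line. A minimal sketch, assuming equal-length fields and that the even field carries frame line 0:

```python
def merge_fields(even_field, odd_field):
    # Interleave the fields: the even field supplies frame lines
    # 0, 2, 4, ...; the odd field supplies lines 1, 3, 5, ...
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)
        frame.append(odd_line)
    return frame

even = [[0, 0], [2, 2]]   # scan lines 0 and 2 of the image
odd = [[1, 1], [3, 3]]    # scan lines 1 and 3 of the image
frame = merge_fields(even, odd)
```

The resulting frame holds the lines in spatial order, ready for the vertical non-uniform interpolation of box 366.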
The steps in boxes 360-368 can be repeated to generate a plurality of frames, and thus a corresponding plurality of images. A motion picture, represented by the high-resolution video signal, can be rendered by letting each of the frames correspond to a different consecutive pair of the fields.
FIG. 17 illustrates a flow diagram of a method of processing an interlaced color video signal to generate a high-resolution video signal in accordance with one embodiment of the present invention.
The video systems depicted in FIGS. 4-7 can be employed to perform this method. In addition to the steps shown in FIG. 16, the method of FIG. 17 also includes the step given in box 374. In box 374, the digital video signal is decoded into a plurality of color component signals. The color component signals represent the components of a color space. For example, in the RGB color space, one of the color component signals represents the red space, another represents the green space, and a third represents the blue space.
The remainder of the steps in the method, depicted in boxes 376-380, are performed for each of the color components. Hence, continuing the example of the RGB color space, in box 376, two consecutive red fields are merged to produce a red frame that includes both even and odd scan lines. Likewise, two consecutive green fields are merged to produce a green frame, and two consecutive blue fields are merged to produce a blue frame. In box 378, for each color, non-uniform interpolation is performed between adjacent scan lines in the respective component frame. The resulting interpolated pixels of the color components form a high-resolution video signal which represents the color space. The high-resolution video signal has a greater number of vertical scan lines than the original interlaced video signal. In box 380, a color image, represented by the high-resolution video signal, is displayed on a monitor, such as the monitor 76 depicted in FIGS. 4-7.
The steps in boxes 370-380 can be repeated to generate a plurality of frames, and thus a corresponding plurality of images. A color motion picture, represented by the high-resolution video signal, can be rendered by letting each of the frames correspond to a different consecutive pair of the fields.
FIG. 18 illustrates a contextual diagram of a broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7. The broadcasting system includes a broadcasting station 420 and a receiver 424. The broadcasting station 420 includes a transmitter 421 that emits a video signal 422 which travels via the atmosphere to the receiver 424. The transmitter 421 can include a ground based antenna, microwave relay, or satellite. The video signal can include broadcast information formatted according to conventional television transmission standards, such as NTSC, PAL, SECAM, or any variation of these standards. The receiver 424 includes an embodiment of the present invention and may be construed as any means for receiving the video signal 422 and displaying a transmitted image. For example, the receiver 424 could include a color television receiver, a projection screen TV, or a computer.
FIG. 19 illustrates a contextual diagram of a cable broadcasting system which employs at least one of the video systems depicted in FIGS. 4-7. The cable broadcasting system includes a video source 430, a transmission medium 432, such as a coaxial cable, and a receiver 434. The video source 430 includes a transmitter that emits a video signal which travels via the transmission medium 432 to the receiver 434. The video source 430 can include a video cassette player, video camera that plays back images, or a CD ROM. The video signal can include broadcast information formatted according to conventional television transmission standards, such as NTSC, PAL, SECAM, or any variation of these standards. The receiver 434 may be any means, which includes an embodiment of the present invention, for receiving the video signal and displaying a transmitted image. For example, the receiver 434 could include a television, a projection screen TV, or a computer.
Thus, there has been described herein a concept, as well as several embodiments including a preferred embodiment, of a video system which utilizes non-uniform interpolation to generate an improved video image.
Because the various embodiments of the video system, and the method of using same, as herein-described use non-uniform interpolation to increase the number of scan lines in a video signal in real-time, they produce a video image of vastly improved quality. Furthermore, because the various embodiments of the video system include a converter that accepts real-time video signals formatted according to conventional NTSC standards, they are capable of generating improved NTSC video images that can be displayed on high-resolution computer monitors. In addition, since the various embodiments of the video system include an enhanced-video circuit for performing non-uniform interpolation which is inexpensive and practical to implement using an integrated circuit, they can be incorporated into consumer television receivers.
While specific embodiments of the present invention have been shown and described, it will be apparent to those skilled in the art that the disclosed invention may be modified in numerous ways and may assume many embodiments other than the preferred form specifically set out and described above.
Accordingly, it is intended by the appended claims to cover all modifications of the invention which fall within the true spirit and scope of the invention.
What is claimed is:

Claims

1. A video system, comprising: converter means for converting a video signal to a plurality of color space signals; a memory for storing the plurality of color space signals corresponding to an input frame, the memory providing the color space signals as output; an enhanced-video circuit, operatively coupled to the memory, for performing non-uniform interpolation between adjacent color space signals, the enhanced-video circuit generating a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame; and a sync generator for generating a sync trigger signal and a sampling signal from the video signal, the sync trigger signal being distributed to the memory and the enhanced-video circuit for coordinating the transfer of the plurality of color space signals stored in the memory to the enhanced-video circuit, and the sampling signal synchronizing operations of the converter means and the memory.
2. The video system of claim 1, wherein the non-uniform interpolation is based on a function:

yi = c1ik*dk + c2ik*dk+1

wherein yi represents an interpolated pixel signal, dk and dk+1 represent a pair of adjacent color space signals, c1ik represents a first coefficient, c2ik represents a second coefficient, and i and k are integer indices.
3. The video system of claim 2, wherein

0 ≤ c1ik ≤ 1 and 0 ≤ c2ik ≤ 1, and wherein c1ik + c2ik = 1.
4. The video system of claim 1, wherein the non-uniform interpolation is based on a function:

yi = c1ik*dk + c2ik*dk+1 + c3ik*dk+2

wherein yi represents an interpolated pixel signal, dk, dk+1, and dk+2 represent three successive adjacent color space signals, c1ik represents a first coefficient, c2ik represents a second coefficient, c3ik represents a third coefficient, and i and k are integer indices.
5. The video system of claim 1, wherein the video signal includes a sequence of interlaced fields and the input frame includes two consecutively received fields from the sequence of interlaced fields.
6. The video system of claim 1, wherein the video system is programmable to vary the number of vertical scan lines in the output frame.
7. The video system of claim 1, further comprising: a PLL for generating at least one high-band sync signal from a field sync signal produced by the sync generator, the PLL providing the at least one high-band sync signal to the enhanced-video circuit.
8. A video system, comprising: an A/D converter for converting a video signal to a plurality of binary-coded signals; a signal converter, responsive to the plurality of binary-coded signals, for generating a plurality of chrominance signals and a plurality of luminance signals; a decoder for generating a plurality of color space signals from the pluralities of chrominance and luminance signals; a memory for storing the plurality of color space signals corresponding to an input frame, the memory providing the color space signals as output; an enhanced-video circuit, operatively coupled to the memory, for performing non-uniform interpolation between adjacent color space signals, the enhanced-video circuit generating a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame; a color space converter for converting the interpolated pixel signals to a plurality of output format signals; and a sync generator for generating a sync trigger signal and a sampling signal from the video signal, the sync trigger signal being distributed to the memory and the enhanced-video circuit for coordinating the transfer of the plurality of color space signals stored in the memory to the enhanced-video circuit, and the sampling signal synchronizing operations of the A/D converter, the signal converter, the decoder, and the memory.
9. A color television receiver, comprising: an A/D converter for converting an NTSC signal to a plurality of binary-coded signals; a signal converter, responsive to the plurality of binary-coded signals, for generating a plurality of chrominance signals and a plurality of luminance signals; a decoder for generating a plurality of RGB signals from the pluralities of chrominance and luminance signals; a memory for storing the plurality of RGB signals corresponding to an input frame, the memory providing the RGB signals as output; an enhanced-video circuit, operatively coupled to the memory, for performing non-uniform interpolation between adjacent RGB signals, the enhanced-video circuit generating a plurality of interpolated RGB signals which represent an output frame having a greater number of vertical scan lines than the input frame; and a sync generator for generating a sync trigger signal and a sampling signal from the NTSC signal, the sync trigger signal being distributed to the memory and the enhanced-video circuit for coordinating the transfer of the plurality of RGB signals stored in the memory to the enhanced-video circuit, and the sampling signal synchronizing operations of the A/D converter, the signal converter, the decoder, and the memory.
10. A video system, comprising: a transmitter for broadcasting a video signal to at least one receiver; a receiver for generating an image represented by the video signal, the receiver including: converter means for converting the video signal to a plurality of color space signals; a memory for storing the plurality of color space signals corresponding to an input frame, the memory providing the color space signals as output; an enhanced-video circuit, operatively coupled to the memory, for performing non-uniform interpolation between adjacent color space signals, the enhanced-video circuit generating a plurality of interpolated pixel signals which represent an output frame having a greater number of vertical scan lines than the input frame; a monitor responsive to the interpolated pixel signals, for displaying the image; and a sync generator for generating a sync trigger signal and a sampling signal from the video signal, the sync trigger signal being distributed to the memory and the enhanced-video circuit for coordinating the transfer of the plurality of color space signals stored in the memory to the enhanced-video circuit, and the sampling signal synchronizing operations of the converter means and the memory.
PCT/US1996/009409 1995-06-29 1996-06-07 Non-linear interpolation for color video line conversion WO1997001937A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU60988/96A AU6098896A (en) 1995-06-29 1996-06-07 Non-linear interpolation for color video line conversion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/496,641 1995-06-29
US08/496,641 US5742350A (en) 1995-06-29 1995-06-29 Video system performing non-uniform interpolation of color space signals and method of using same

Publications (1)

Publication Number Publication Date
WO1997001937A1 true WO1997001937A1 (en) 1997-01-16

Family

ID=23973528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/009409 WO1997001937A1 (en) 1995-06-29 1996-06-07 Non-linear interpolation for color video line conversion

Country Status (6)

Country Link
US (2) US5742350A (en)
KR (1) KR100261638B1 (en)
AU (1) AU6098896A (en)
FR (1) FR2736236A1 (en)
TW (1) TW361034B (en)
WO (1) WO1997001937A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2334845A (en) * 1998-02-28 1999-09-01 Samsung Electronics Co Ltd A scan format converter using look-up tables for converting a video format by bi-sigmoidal interpolation

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4035189B2 (en) * 1995-12-28 2008-01-16 キヤノン株式会社 Imaging device
JP3472667B2 (en) * 1996-08-30 2003-12-02 株式会社日立製作所 Video data processing device and video data display device
US6088062A (en) * 1996-10-29 2000-07-11 Sony Corporation Picture signal processing apparatus
US6437829B1 (en) * 1997-01-16 2002-08-20 Display Laboratories, Inc. Alignment of cathode ray tube displays using a video graphics controller
US6239842B1 (en) 1998-12-18 2001-05-29 Oplus Technologies Ltd. Method of de-interlacing video signals using a mixed mode spatial and temporal approximation technique
US6650704B1 (en) 1999-10-25 2003-11-18 Irvine Sensors Corporation Method of producing a high quality, high resolution image from a sequence of low quality, low resolution images that are undersampled and subject to jitter
EP1269754A4 (en) * 2000-03-14 2009-03-11 Joseph Robert Marchese Digital video system using networked cameras
CA2382133C (en) * 2000-05-10 2010-11-23 Alliance Pharmaceutical Corporation Phospholipid-based powders for drug delivery
US20110013081A1 (en) * 2001-01-11 2011-01-20 Pixelworks, Inc. System and method for detecting a non-video source in video signals
CA2330854A1 (en) * 2001-01-11 2002-07-11 Jaldi Semiconductor Corp. A system and method for detecting a non-video source in video signals
US6844875B2 (en) * 2001-04-03 2005-01-18 The United States Of America As Represented By The Secretary Of The Navy Video converter board
AU2002350949A1 (en) * 2001-06-25 2003-01-08 Redhawk Vision Inc. Video event capture, storage and processing method and apparatus
KR100423455B1 (en) * 2001-10-24 2004-03-18 삼성전자주식회사 Device for processing image signal and method therein
TWI227085B (en) * 2003-11-13 2005-01-21 Realtek Semiconductor Corp Method and apparatus for detecting sawtooth and field motion
US7557861B2 (en) * 2004-01-30 2009-07-07 Broadcom Corporation Reverse pull-down video using corrective techniques
US8861589B2 (en) * 2004-01-30 2014-10-14 Broadcom Corporation Detection and phase lock of pull-down video
US20080253455A1 (en) * 2004-05-06 2008-10-16 Koninklijke Philips Electronics, N.V. High Frame Motion Compensated Color Sequencing System and Method
US7468756B2 (en) * 2004-10-05 2008-12-23 Broadcom Corporation Detection and phase lock of 2:2 and 3:2 pull-down video
US7468757B2 (en) * 2004-10-05 2008-12-23 Broadcom Corporation Detection and correction of irregularities while performing inverse telecine deinterlacing of video
EP2242247A3 (en) * 2004-11-01 2012-07-25 Technicolor, Inc. Method and system for mastering and distributing enhanced color space content
JP2009521840A (en) * 2005-12-21 2009-06-04 トムソン ライセンシング Limited color palette in color space
TWI317599B (en) * 2006-02-17 2009-11-21 Novatek Microelectronics Corp Method and apparatus for video mode judgement
US9166883B2 (en) 2006-04-05 2015-10-20 Joseph Robert Marchese Network device detection, identification, and management
TWI325273B (en) * 2006-08-16 2010-05-21 Realtek Semiconductor Corp Method and apparatus for detecting sawtooth and field motion
JP4427592B2 (en) * 2008-08-04 2010-03-10 株式会社東芝 Image processing apparatus and image processing method
US9491398B1 (en) 2010-12-21 2016-11-08 Pixelworks, Inc. System and method for processing assorted video signals

Citations (2)

Publication number Priority date Publication date Assignee Title
US5243433A (en) * 1992-01-06 1993-09-07 Eastman Kodak Company Digital image interpolation system for zoom and pan effects
US5274447A (en) * 1991-03-28 1993-12-28 Matsushita Electric Industrial Co., Ltd. Apparatus for converting a field frequency and a scanning line number of a television signal

Family Cites Families (38)

Publication number Priority date Publication date Assignee Title
US5218430A (en) * 1986-04-26 1993-06-08 Erno Gmbh Color video transmission
DE3875983T2 (en) * 1987-03-04 1993-04-15 Hitachi Ltd Device for displaying low-resolution video signals on high-resolution video monitors
US4870481A (en) * 1987-12-01 1989-09-26 Ikegami Tsushiniki Co., Ltd. Color television signal transmission system and improved-definition receiver for use in the system
US5294984A (en) * 1988-07-23 1994-03-15 Ryoichi Mori Video signal processing system for producing intermediate pixel data from neighboring pixel data to improve image quality
FR2636488A1 (en) * 1988-09-09 1990-03-16 Labo Electronique Physique TELEVISION STANDARD CONVERTER DEVICE
US4876596A (en) * 1988-10-25 1989-10-24 Faroudja Y C Film-to-video converter with scan line doubling
US4967271A (en) * 1989-04-05 1990-10-30 Ives C. Faroudja Television scan line doubler including temporal median filter
US4989090A (en) * 1989-04-05 1991-01-29 Yves C. Faroudja Television scan line doubler including temporal median filter
US4982280A (en) * 1989-07-18 1991-01-01 Yves C. Faroudja Motion sequence pattern detector for video
US5014119A (en) * 1989-08-25 1991-05-07 Faroudja Y C Horizontal and vertical transition level enhancement within television system
US5040062A (en) * 1990-03-19 1991-08-13 At&T Bell Laboratories Television signal arrangement where selected signals are encoded digitally
US5124688A (en) * 1990-05-07 1992-06-23 Mass Microsystems Method and apparatus for converting digital YUV video signals to RGB video signals
US5291275A (en) * 1990-06-20 1994-03-01 International Business Machines Incorporated Triple field buffer for television image storage and visualization on raster graphics display
US5233684A (en) * 1990-06-26 1993-08-03 Digital Equipment Corporation Method and apparatus for mapping a digital color image from a first color space to a second color space
US5049993A (en) * 1990-10-03 1991-09-17 Bell Communications Research, Inc. Format conversion preprocessing method and circuit
KR930011844B1 (en) * 1991-01-22 1993-12-21 삼성전자 주식회사 Scanning line converting circuit
GB2252468B (en) * 1991-02-04 1994-10-19 Sony Broadcast & Communication Television standards converters
US5159451A (en) * 1991-03-19 1992-10-27 Faroudja Y C Field memory expansible line doubler for television receiver
US5151783A (en) * 1991-06-05 1992-09-29 Faroudja Y C Digital television with enhancement
US5537638A (en) * 1991-10-25 1996-07-16 Hitachi, Ltd. Method and system for image mapping
US5414469A (en) * 1991-10-31 1995-05-09 International Business Machines Corporation Motion video compression system with multiresolution features
KR940006935B1 (en) * 1991-12-23 1994-07-29 주식회사 금성사 Line doubler device for TV signal
KR930015760A (en) * 1991-12-31 1993-07-24 강진구 TV mode automatic inverter
GB2264415B (en) * 1992-02-13 1995-09-20 Sony Broadcast & Communication Motion compensation for colour video signals
US5428398A (en) * 1992-04-10 1995-06-27 Faroudja; Yves C. Method and apparatus for producing from a standard-bandwidth television signal a signal which when reproduced provides a high-definition-like video image relatively free of artifacts
US5291280A (en) * 1992-05-05 1994-03-01 Faroudja Y C Motion detection between even and odd fields within 2:1 interlaced television standard
JP3332093B2 (en) * 1992-09-04 2002-10-07 株式会社東芝 Television signal processor
KR0128245B1 (en) * 1992-10-07 1998-04-02 배순훈 High definition television with dividing pictures
KR950005647B1 (en) * 1992-10-29 1995-05-27 주식회사금성사 Shared receiving system of NTSC signal and HDTV signal
AU5691294A (en) * 1992-12-18 1994-07-19 Maher A. Sid-Ahmed Real-time television image pixel multiplication methods and apparatus
US5428397A (en) * 1993-05-07 1995-06-27 Goldstar Co., Ltd. Video format conversion apparatus for converting interlaced video format into progressive video format using motion-compensation
US5329365A (en) * 1993-07-07 1994-07-12 Rca Thomson Licensing Corporation Method and apparatus for providing compressed non-interlaced scanned video signal
US5398071A (en) * 1993-11-02 1995-03-14 Texas Instruments Incorporated Film-to-video format detection for digital television
US5625421A (en) * 1994-01-14 1997-04-29 Yves C. Faroudja Suppression of sawtooth artifacts in an interlace-to-progressive converted signal
JPH0877341A (en) * 1994-08-29 1996-03-22 Xerox Corp Equipment and method for color image processing
KR960028124A (en) * 1994-12-30 1996-07-22 이몬 제이. 월 Method and apparatus for identifying video fields generated by film sources
US5508750A (en) * 1995-02-03 1996-04-16 Texas Instruments Incorporated Encoding data converted from film format for progressive display
US5606373A (en) * 1995-04-04 1997-02-25 International Business Machines Corporation Methods for repeated field detection

Cited By (2)

Publication number Priority date Publication date Assignee Title
GB2334845A (en) * 1998-02-28 1999-09-01 Samsung Electronics Co Ltd A scan format converter using look-up tables for converting a video format by bi-sigmoidal interpolation
GB2334845B (en) * 1998-02-28 2000-04-26 Samsung Electronics Co Ltd Method for making look-up tables for video format conversion and scan format converter using the look-up tables

Also Published As

Publication number Publication date
FR2736236A1 (en) 1997-01-03
US5742350A (en) 1998-04-21
KR19990028351A (en) 1999-04-15
AU6098896A (en) 1997-01-30
US5861924A (en) 1999-01-19
KR100261638B1 (en) 2000-07-15
TW361034B (en) 1999-06-11

Similar Documents

Publication Publication Date Title
US5742350A (en) Video system performing non-uniform interpolation of color space signals and method of using same
KR100426889B1 (en) A self-moving image scanning format converter with seamless switching
EP0782333B1 (en) Image display apparatus
US6144412A (en) Method and circuit for signal processing of format conversion of picture signal
US6489997B1 (en) Versatile video transformation device
US6927801B2 (en) Video signal processing apparatus and video displaying apparatus
JPH02237280A (en) Standard/high definition television receiver
US6208382B1 (en) Color video processing system and method
KR100769244B1 (en) Method of converting interlaced video signals to progressive video signals, method and system of converting interlaced MPEG video signals to progressive video signals
KR920010043B1 (en) Normal TV and HDTV scene signal selection apparatus and method
US5784116A (en) Method of generating high-resolution video
JPS5877373A (en) Television signal processing circuit
US5717466A (en) Circuit for interpolating scan lines of a video signal and method of using same
Seth-Smith et al. Flexible upconversion for high quality TV and multimedia displays
JPH04167685A (en) Television receiver
US5029002A (en) High definition television system
KR100311009B1 (en) Apparatus and method for converting video format using common format
Poynton High definition television and desktop computing
KR920010940B1 (en) Television display device and method for picture-in-picture (PIP) display
JPH0666945B2 (en) High definition television receiver
JPH03179890A (en) Television receiver
JPH0362686A (en) Television receiver
JPH11103448A (en) Scanning line converting device
JPH05328315A (en) System converting device for video signal
JPH0362687A (en) Television receiver

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 96195132.X

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1019970709666

Country of ref document: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA

WWP Wipo information: published in national office

Ref document number: 1019970709666

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1019970709666

Country of ref document: KR