|Publication number||USRE39898 E1|
|Application number||US 09/374,041|
|Publication date||Oct 30, 2007|
|Filing date||Aug 13, 1999|
|Priority date||Jan 23, 1995|
|Also published as||US5598525|
|Inventors||Robert M. Nally, John C. Schafer|
|Original Assignee||Nvidia International, Inc.|
The questions raised in reexamination request No. 90/005,471, filed Aug. 13, 1999, have been considered and the results thereof are reflected in this reissue patent which constitutes the reexamination certificate required by 35 U.S.C. 307 as provided in 37 CFR 1.570(e).
The present invention relates in general to multimedia processing and display systems and in particular to apparatus, systems and methods for controlling graphics and video data overlay in multimedia processing and display systems.
The following copending and coassigned United States patent applications contain related information and are incorporated herein by reference:
U.S. patent application Ser. No. 08/098,846 (Attorney's Docket No. P3510-P11US), entitled “System And Method For The Mixing Of Graphics And Video Signals,” and filed Jul. 29, 1993; and
U.S. patent application Ser. No. 08/223,845 now U.S. Pat. No. 5,506,604 (Attorney's Docket No. P3510-P21US), entitled “Apparatus, Systems And Methods For Processing Video Data In Conjunction With A Multi-Format Frame Buffer,” and filed Apr. 6, 1994.
As multimedia information processing systems increase in popularity, system designers must consider new techniques for controlling the processing and display of data simultaneously generated by multiple sources. In particular, there has been substantial demand for processing systems which have the capability of concurrently displaying both video and graphics data on a single display screen. The development of such systems presents a number of design challenges, not only because the format differences between graphics and video data must be accounted for, but also because of end user driven requirements that these systems allow for flexible manipulation of the data on the display screen.
One particular technique for simultaneously displaying video and graphics data on a single display screen involves the generation of “windows.” In this case, a stream of data from a selected source is used to generate a display within a particular region or “window” of the display screen to the exclusion of any non-selected data streams defining a display or part of a display corresponding to the same region of the screen. The selected data stream generating the display window “overlays” or “occludes” the data from the nonselected data streams which lie “behind” the displayed data. In one instance, the overall content and appearance of the display screen is defined by graphics data and one or more “video windows” generated by data from a video source occlude a corresponding region of that graphics data. In other instances, a video display or window may be occluded or overlaid by graphics data or even another video window.
In the multimedia environment, the “windowing” described above yields substantial advantages. Among other things, the user can typically change the size and location on the display screen of a given window to flexibly manipulate the content and appearance of the data being displayed. For example, in the case of combined graphics and video, the user can advantageously create custom composite visual displays by combining multiple video and graphics data streams in a windowing environment.
In order to efficiently control windows in a multimedia environment, efficient frame buffer management is required. Specifically, a frame buffer control scheme must be developed which allows for the efficient storage and retrieval of multiple types of data, such as video data and graphics data. To be cost competitive as well as functionally efficient, such a scheme should minimize the number of memory devices and the amount of control circuitry required and should ensure that data flow to the display is subjected to minimal delay notwithstanding data type.
One of the major difficulties in managing video in a combined video and graphics windowing environment results from the fact that the video data being received and displayed are constantly being updated, typically at a rate of thirty frames per second. In contrast, the graphics data are normally generated once to define the graphics display and then remain static until the system CPU changes that graphics display. Thus, the occlusion (overlay) of video data with graphics data requires that the static graphics data “in front of” the video data not be destroyed each time the video window is updated. A second concern with windowing systems operating on both video and graphics data is the formatting differences between the video and graphics data themselves, since video is typically digitized into a YUV color space while graphics is digitized into an RGB color space. Hence, any combined video and graphics windowing system must have the capability of efficiently handling data within both the YUV and RGB formats.
Thus, due to the advantages of windowing, the need has arisen for efficient and cost effective windowing control circuitry. Such windowing circuitry should allow for the simultaneous processing of data received from multiple sources and in multiple formats. In particular, such windowing control circuitry should be capable of efficiently and inexpensively controlling the occlusion and/or overlay of video and graphics data in a windowing environment.
The principles of the present invention in general provide for the flexible control of graphics and video data in a display control environment. In particular, an entire frame of video data, graphics data, or a combination of both, may be stored in on-screen memory and rastered out with the generation of the corresponding display screen. A window of graphics or video data can then be stored in off-screen memory and retrieved when the raster scan generating the display reaches the desired position on the display for the video window. The window of data from off-screen memory can then be overlaid on the data being rastered out of the on-screen memory under one of three conditions. In a first mode, pixels from the off-screen memory are rastered only when the raster scan has reached the position on the display selected for the window. In a second mode, a window of data is rastered from the off-screen memory when the display raster scan has reached the display window position and graphics data being rastered from the on-screen memory matches a color key. In a third mode, the window data is rastered out of the off-screen memory when the data being output from the on-screen memory matches the color key, notwithstanding the position of the raster scan.
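The three overlay conditions can be summarized as a simple selection function. The following is a behavioral sketch only, not the hardware logic; the mode numbers and argument names are assumptions for illustration:

```python
def select_pixel(mode, in_window, color_key_match, onscreen_pixel, offscreen_pixel):
    """Choose between the on-screen pixel and the off-screen (window) pixel.

    mode 1: overlay purely by raster position within the window.
    mode 2: overlay by raster position AND color-key match.
    mode 3: overlay by color-key match alone, regardless of position.
    """
    if mode == 1:
        overlay = in_window
    elif mode == 2:
        overlay = in_window and color_key_match
    elif mode == 3:
        overlay = color_key_match
    else:
        overlay = False  # no overlay: always show on-screen data
    return offscreen_pixel if overlay else onscreen_pixel
```

In mode 3, the color key alone decides the overlay, which is why, as discussed later, precise window position data is not required for that case.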
According to a first embodiment of the present invention, a graphics and video controller is provided which includes a dual aperture interface for receiving words of pixel data, each word associated with an address directing it to a selected one of on-screen and off-screen areas of an associated unified frame buffer for processing as either graphics or video pixel data. Circuitry is provided for writing a word of the pixel data received by the interface to the one of the on-screen and off-screen memory areas corresponding to the address associated with the received word. Circuitry is also included for selectively retrieving graphics and video data from the on-screen and off-screen memory areas. A first pipeline is provided for processing graphics data retrieved from the frame buffer and a second pipeline is provided for processing video data retrieved from the frame buffer.
According to a second embodiment of the present invention, a controller is provided which includes a dual aperture port for receiving video and graphics data, each word of the data received with an associated address directing the word to be processed as either graphics or video data and to a selected one of on-screen and off-screen memory spaces of a frame buffer. A second port is included for receiving real-time video data. Circuitry is provided for generating an address associated with a selected one of the memory spaces for each word of received real-time video data. Circuitry is included for selectively writing the words into the on-screen and off-screen memory spaces of the frame buffer. Circuitry is also provided for selectively retrieving the words of data from the on-screen and off-screen spaces as data is rastered for driving a display. A graphics backend pipeline processes ones of the graphics words of data retrieved from the frame buffer. A video backend pipeline is provided for processing ones of the video words of data retrieved from the frame buffer, the circuitry for retrieving always rastering a stream of graphics data from the frame buffer to the graphics pipeline and rastering video data to the video backend pipeline when a display raster scan reaches a display position of a video window. An output selector is included for selecting for output between words of data output from the graphics backend pipeline and words of data output from the video backend pipeline.
According to a third embodiment of the present invention, a display system is provided which includes first and second parallel backend pipelines. A multi-format frame buffer memory is included having on-screen and off-screen memories each operable to simultaneously store data in graphics and video formats. A dual aperture port is provided for receiving both graphics and video data as directed by an address associated with each word of data received. Circuitry for writing is included for writing a word of video or graphics data into a selected one of the on-screen and off-screen areas of the multi-format frame buffer. Memory control circuitry controls the transfer of data between the first and second backend pipelines and the frame buffer. The system further includes a display unit and overlay control circuitry for selecting for output to the display unit between data provided by the first backend pipeline and data provided by the second backend pipeline.
A fourth embodiment of the present invention comprises a display data processing system which includes circuitry for writing data into an on-screen space of a frame buffer and circuitry for writing data into an off-screen space of the frame buffer. A video pipeline is provided for processing video data output from a selected one of the on-screen and off-screen spaces. The video pipeline includes a first first-in/first-out memory for receiving selected pixel data from the selected space. The video pipeline also includes a second first-in/first-out memory disposed in parallel to the first first-in/first-out memory for receiving other selected data from the selected space in the frame buffer. An interpolator is provided as part of the video pipeline for generating additional data by interpolating data output from the first and second first-in/first-out memories. A graphics pipeline is disposed in parallel to the video pipeline for processing graphics data output from a selected one of the on-screen and off-screen spaces. Finally, an output selector is provided for selecting between data output from the video pipeline and data output from the graphics pipeline.
The principles of the present invention allow for the construction of circuits and systems with substantial advantages over the prior art. Among other things, the principles of the present invention allow both graphics and video data to be stored in a single unified frame buffer and retrieved therefrom in a number of different ways. For example, a combination of graphics and video data may be stored in the on-screen memory and simply rastered out during screen refresh. In another case, an entire screen of graphics or video data may be stored in the on-screen memory while a window of graphics or video data is stored in the off-screen portion of memory. The window data can then be rastered out to selectively overlay a portion of the data being rastered out of the on-screen memory. The overlay may be controlled by the window display position, by a match between the on-screen data being rastered out and a color key, or by both.
The embodiments of the present invention provide for the efficient and inexpensive overlay of video and graphics data in a windowing environment. In particular, the use of color comparison to determine the overlay of data in a window region eliminates the need for precise x- and y-position data for the location of that window and allows for video cropping to be performed. Further, the use of graphics data to control overlay provides substantial advantages in that graphics data is less subject to the graininess and noise problems often found with video data. Further, the user is given total control of overlay operations when keying on graphics data because the graphics data is computer generated, whereas the video data is captured data.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
A VGA controller 105 embodying the principles of the present invention is also coupled to local bus 103. VGA controller 105 will be discussed in detail below; however, VGA controller 105 generally interfaces CPU 101 and video source 104 with a display unit 106 and a multi-format system frame buffer 107. Frame buffer memory 107 provides temporary storage of the graphics and video data during processing prior to display on display unit 106. According to the principles of the present invention, VGA controller 105 is operable in selected modes to store graphics and video data together in frame buffer 107 in their native formats. In a preferred embodiment, the frame buffer area is partitioned into on-screen memory and off-screen memory. Frame buffer 107 is also a “unified” memory in which video or graphics data can be stored in either the on-screen or off-screen areas. In the preferred embodiment, display unit 106 is a conventional raster scan display device and frame buffer 107 is constructed from dynamic random access memory devices (DRAMs).
In the preferred embodiment of system 100, CPU 101 can write video data and/or read and write graphics data to frame buffer 107 via CPU interface 206. In particular, CPU 101 can direct each pixel to the frame buffer using one of two maps depending on whether that pixel is a video pixel or a graphics pixel. In the preferred embodiment, each word of pixel data (“pixel”) is associated with one of two addresses, one which directs interpretation of the pixel as a video pixel through video front-end pipeline 200 and the other which directs interpretation of the pixel as a graphics pixel through write buffer 207 and graphics controller 208. As a consequence, either video or graphics pixel data can then be input to CPU interface 206 from the PCI/VL bus through a single “dual aperture” port as a function of the selected address held in an address buffer.
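The dual aperture scheme can be sketched as routing each write by the address range it falls in. This is a minimal illustration; the base addresses and aperture size below are invented placeholder values, not the patent's actual memory map:

```python
# Hypothetical aperture base addresses (illustrative values only).
GRAPHICS_APERTURE_BASE = 0x0000_0000
VIDEO_APERTURE_BASE    = 0x0080_0000
APERTURE_SIZE          = 0x0080_0000

def route_write(address, word):
    """Route a CPU write to the graphics or video front end based on
    which aperture the address falls in; the offset within the aperture
    maps to the same frame buffer location either way."""
    if GRAPHICS_APERTURE_BASE <= address < GRAPHICS_APERTURE_BASE + APERTURE_SIZE:
        return ("graphics", address - GRAPHICS_APERTURE_BASE, word)
    if VIDEO_APERTURE_BASE <= address < VIDEO_APERTURE_BASE + APERTURE_SIZE:
        return ("video", address - VIDEO_APERTURE_BASE, word)
    raise ValueError("address outside both apertures")
```

The key point illustrated is that the data path taken (video front-end pipeline versus write buffer and graphics controller) is selected purely by the address, not by anything in the pixel word itself.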
Data which is input through the video port 211 is address-free. In this case, video window controls 213 generate the required addresses to either the on-screen memory area or the off-screen memory as a function of display location for the video window. In the preferred embodiment, window controls 213 generate addresses using the same video control registers 203 used to control retrieval of the video in the backend pipeline (i.e., the screen x and y position registers 500 and 501 discussed below in conjunction with FIG. 5). When data is being received through both the CPU interface 206 and the VPORT 211 simultaneously, the data is interleaved into memory with the two write buffers 207 and 217 buffering the data such that neither stream is interrupted or forced into a wait state at the source component (i.e., bus 103 or video source 104).
It should be noted at this point that frame buffer 107 includes at least two different data areas or spaces to which data can be directed by the given address (generated either by CPU 101 or by controls 213). Each space can simultaneously store graphics or video data depending on the selected display configuration. The on-screen area corresponds to the display screen; each pixel rastered out of a given pixel location in the on-screen area defines a corresponding screen pixel. The off-screen area is used to store data defining a window for selectively overlaying the data from the on-screen memory, as well as fonts and other data needed by controller 105. Further, as will be discussed below, both graphics and video data may be rastered from frame buffer 107 and passed through video backend pipeline 204, while only graphics data is ever passed through graphics backend pipeline 205.
According to the principles of the present invention, there are alternate ways of storing and retrieving graphics and video data from unified frame buffer 107.
For example, CPU 101 may write a static graphics background into part of the on-screen memory with the remaining “window” in the on-screen memory area filled with playback video data. “Playback” video data can be either (1) live video data input from the VPORT; (2) YUV (video) data written through interface 206 by CPU 101; or (3) true color (5:5:5, 5:6:5, or 8:8:8) RGB graphics data (for example animation graphics data) written in through either the VPORT or interface 206. Similarly, a playback video background and a window of graphics data may be written into the on-screen area. In each of these cases, the data is rastered out as the display is refreshed, without overlay; the video playback data is passed through the video backend pipeline 204 as a function of display position by controls 202 and the graphics data is passed through the graphics backend pipeline 205.
Windows of data retrieved from the off-screen memory can be retrieved and used to occlude a portion of the data being rastered out of the on-screen memory. For example, a window of playback data can be stored in the off-screen memory and a frame of static graphics data (either true color data or indices to CLUT 234) stored in the on-screen memory. In this case, the static graphics are rastered out of the on-screen memory without interruption and passed through the graphics backend pipeline 205. The window of data in the off-screen memory is rastered out only when the display position for the window has been reached by the display raster and is passed through video backend pipeline 204. As discussed below, data from the video backend pipeline 204 can then be used to selectively occlude (overlay) the data being output from the graphics backend pipeline 205. A window of static graphics data (true color or indices to the CLUT 234) can be stored in off-screen memory and used to overlay playback video from the on-screen memory. The playback video data is passed through the video backend pipeline 204 and the window of static graphics data is passed through the graphics backend pipeline 205.
Bit block transfer (BitBLT) circuitry 209 is provided to allow blocks of graphics data within frame buffer 107 to be transferred, such as when a window of graphics data is moved on the display screen by a mouse. Digital-to-analog converter (DAC) circuitry 210 provides the requisite analog signals for driving display 106 in response to the receipt of either video data from video backend pipeline 204 or graphics data from graphics backend pipeline 205.
In implementing the operations discussed above, video front-end pipeline 200 can receive data from two mutually exclusive input paths. First, in the “playback mode,” playback (non-real time) data may be received via the PCI bus through CPU interface 206. Second, in the “overlay emulation mode” either real-time or playback video may be received through the video port interface 211 (in system 100 video port interface 211 is coupled to bus 109 when real-time data is being received). The selection of video from the PCI bus or video from video port interface 211 is controlled by a multiplexer 212 under the control of bits stored in a video front-end pipeline control register within video control registers 203. In the playback mode, either CPU 101 or a PCI bus master controlling the PCI bus provides the frame buffer addresses, allowing video front-end pipeline 200 to map data into the frame buffer separate and apart from the graphics data. In the overlay emulation mode, overlay input window controls 213 receive framing signals such as VSYNC and HSYNC and track these sync signals with counters to determine the start of each new frame and each new line. Controls 213 then generate the required addresses for the real-time video to the frame buffer space using video window position data received from window controls 222 (as discussed above, in the preferred embodiment, video data is always retrieved from either the on-screen or off-screen memory and passed through video backend pipeline 204 as a function of display position); thus the position data from controls 222 is used both to write data to memory and to retrieve data therefrom. In general, overlay input video controls are controlled by the same registers which control the backend video pipeline 204, although the requisite counters and comparators are located internal to overlay input video control circuitry 213.
Video front-end pipeline 200 also includes encoding circuitry 214 that is operable to truncate 16-bit YUV 4:2:2 data into an 8-bit format and then pack four such 8-bit encoded words into a single 32-bit word which is then written into the video frame buffer space of frame buffer 107. Conversion circuitry 215 is operable to convert RGB 5:5:5 data received from either the CPU interface 206 and the PCI bus or VPORT I/F 211 into YCrCb (YUV) data prior to encoding by encoding circuitry 214. Conversion circuitry 215 allows graphics data (for example in a 5:5:5 or 5:6:5 format) to be introduced through the VPORT, or graphics data to be converted, packed and stored in a YUV format in the off-screen memory space by CPU 101. For a more complete description of encoder 214 and the associated decoder 225 of video pipeline 204, reference is now made to incorporated copending coassigned application Ser. No. 08/223,845. The selection and control of the encoding circuitry 214 and conversion circuitry 215 is implemented through multiplexing circuitries 212 and 216, each of which is controlled by bits in the video control registers. Finally, video front-end pipeline 200 includes a write buffer/FIFO 217 which in one embodiment acts as a write buffer and in an alternate embodiment acts as a FIFO for the video backend pipeline 204. In embodiments where buffer 217 acts as a write buffer, Y zooming on the backend, as discussed below, is by replication. In embodiments where buffer 217 operates as a FIFO, the VPORT and front-end color conversion by converter circuitry 215 are not used for writing data to frame buffer 107.
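The packing of four 8-bit encoded pixels into one 32-bit frame buffer word, and its inverse in the backend decoder, can be sketched as below. The actual 16-bit-to-8-bit truncation scheme is detailed in the incorporated application and is not reproduced here; the byte ordering shown is an assumption:

```python
def pack4(bytes4):
    """Pack four 8-bit encoded pixels into one 32-bit word, with the
    first pixel in the least-significant byte (ordering is assumed)."""
    assert len(bytes4) == 4 and all(0 <= b <= 0xFF for b in bytes4)
    return bytes4[0] | (bytes4[1] << 8) | (bytes4[2] << 16) | (bytes4[3] << 24)

def unpack4(word):
    """Recover the four 8-bit pixels from a packed 32-bit word, as the
    backend decoder must do before expansion back to 16-bit pixels."""
    return [(word >> shift) & 0xFF for shift in (0, 8, 16, 24)]
```

Packing quadruples the number of pixels carried per frame buffer access, which is the bandwidth saving the encoder provides.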
Memory control circuitry 201 includes an arbiter 218 and a memory interface 219. Arbiter 218 prioritizes and sequences requests for access to frame buffer 107 received from video front-end pipeline 200, graphics controller 208 and bit block transfer circuitry 209. Arbiter 218 further sequences each of these requests with the refresh of the display screen of display 106 under the control of CRT controller 202. Memory interface 219 controls the exchange of addresses, data, and control signals (such as RAS, CAS and read/write enable) to and from frame buffer 107.
CRT control/video window control circuitry 202 includes the CRT controller 220, window arbiter 221, and video display window controls 222. CRT controller 220 controls the refresh of the screen of display 106 and in particular the rastering of data from frame buffer 107 to display unit 106 through DAC 210. In the preferred embodiment, CRT controller 220, through arbiter 218 and memory interface 219, maintains a constant stream of graphics data into graphics backend pipeline 205 from memory; video or playback graphics data is rastered out only when a window has been reached by the display raster as determined by display position controls of window controls 222 (see
Video backend pipeline 204 receives a window of graphics or video data defining a display window from the on-screen or off-screen spaces in frame buffer 107 through a pair of first-in/first-out memories 223 and 217 (in embodiments where buffer 217 is acting as FIFO B). In the preferred embodiment, each FIFO receives the data for every other display line of data being generated for display on the display screen. For example, for a pair of adjacent lines n−1 and n+1 in memory (although not necessarily adjacent on the display) for the display window, FIFO 223 receives the data defining window display line n−1 while FIFO 224 receives the data defining window display line n+1. When buffer 217 is used as FIFO B, writes through video front-end pipeline 200 are made through write buffer I 207 and multiplexer 235. Alternatively, if buffer 217 is used as write buffer II, then FIFO B is not implemented and only a single stream is processed by video backend pipeline 204 (no Y interpolation is performed and Y expansion is by replication). As will be discussed further below (assuming both FIFO A and FIFO B are being used), one or more display lines, which fall between line n−1 and line n+1, may be selectively generated by interpolation. Decoder circuitry 225 receives two 32-bit packed words (as encoded by encoder 214), one from each adjacent scan line in memory, from FIFOs 223 and 217. Each 32-bit word, which represents four YCrCb pixels, is expanded and error diffused by decoder 225 into four 16-bit YCrCb pixels. In modes where video data is stored in the frame buffer in standard 5:5:5 RGB or 16-bit YCrCb data formats, decoder block 225 is bypassed.
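The alternating assignment of window lines to the two parallel FIFOs can be sketched as follows. The even/odd split shown is a simplifying assumption consistent with the example above, in which the two FIFOs hold two vertically adjacent window lines at once:

```python
def fill_fifos(window_lines):
    """Distribute successive window display lines between the two
    parallel FIFOs so that, at any moment, FIFO A and FIFO B present
    two vertically adjacent lines to the interpolator."""
    fifo_a = window_lines[0::2]  # lines 0, 2, 4, ... to FIFO A
    fifo_b = window_lines[1::2]  # lines 1, 3, 5, ... to FIFO B
    return fifo_a, fifo_b
```

Having both lines available simultaneously is what makes single-pass Y interpolation possible; with only one FIFO (buffer 217 used as write buffer II), vertical expansion falls back to line replication.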
Backend video pipeline 204 further includes a Y interpolator 226 and X interpolator 227. In the preferred embodiment, during Y zooming (expansion) Y interpolator 226 accepts two vertically adjacent 16-bit RGB or YCrCb pixels from the decoder 225 and calculates one or more resampled output pixels using a four subpixel granularity. X interpolator 227 during X zooming (expansion) accepts horizontally adjacent pixels from the Y interpolator 226 and calculates one or more resampled output pixels using a four subpixel granularity. For data expansion using line replication, Y interpolator 226 is bypassed. Y interpolator 226 and X interpolator 227 allow for the resizing of a video display window being generated from one to four times.
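Interpolation with "four subpixel granularity" can be read as resampling with phases quantized to quarter steps between two adjacent source pixels. The sketch below is an assumed interpretation for a single component plane, not the hardware datapath:

```python
def lerp_quarters(p0, p1, k):
    """Blend two adjacent pixel components with 1/4-step granularity;
    k in 0..3 places the output k/4 of the way from p0 to p1."""
    assert 0 <= k <= 3
    return (p0 * (4 - k) + p1 * k) // 4

def zoom_line(line, factor):
    """Resample one component plane to `factor` times its length
    (factor 1..4, matching the one-to-four-times resizing range) by
    interpolating between adjacent pixels at quarter-step phases."""
    assert 1 <= factor <= 4
    out = []
    n = len(line)
    for i in range(n * factor):
        src = i / factor
        j = int(src)
        k = round((src - j) * 4) % 4   # quantize phase to quarters
        p0 = line[j]
        p1 = line[min(j + 1, n - 1)]   # clamp at the line's last pixel
        out.append(lerp_quarters(p0, p1, k))
    return out
```

The same blending step applies vertically in the Y interpolator (between the two lines held in FIFO A and FIFO B) and horizontally in the X interpolator.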
The output of X interpolator 227 is passed to a color converter 228 which converts the YCrCb data into RGB data for delivery to output multiplexer 231. To reiterate, if graphics data is passed through the video pipeline, converter 228 is not used.
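A representative YCrCb-to-RGB conversion of the kind converter 228 performs is sketched below. The patent does not specify its coefficients, so the common ITU-R BT.601 full-range equations are used here purely as an illustration:

```python
def ycrcb_to_rgb(y, cr, cb):
    """Convert one YCrCb pixel (8-bit components, chroma centered on
    128) to RGB using representative BT.601 full-range coefficients."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.714136 * (cr - 128) - 0.344136 * (cb - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, int(round(v))))  # saturate to 0..255
    return clamp(r), clamp(g), clamp(b)
```

Note that graphics data bypasses this stage because it is already in an RGB format when it reaches the end of the pipeline.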
Backend video circuitry 204 further includes pipeline control circuitry 229, overlay control circuitry 230 and output multiplexer 231. Pipeline control circuitry 229 controls the reading of data from video FIFOs 223 and 217, controls the generation of interpolation coefficients for use by Y and X interpolators 226 and 227 to resize the video window being pipelined, and times the transfer of data through the pipeline. Overlay control circuitry 230, along with control circuitry 202, controls the output of data through output multiplexer 231, including the overlay of the video window over the graphics data output through the graphics backend pipeline 205. A pixel doubler is provided to double the number of pixels being generated such that a 1280×1024 display can be driven.
Graphics backend pipeline 205 includes a first-in/first-out memory 232, attribute controller 233, and color look-up table 234. Each 32-bit word output from graphics FIFO 232 is serialized into either 8-bit, 16-bit or 24-bit words. The 8-bit words, typically composed of an ASCII code and an attribute code, are sent to attribute controller 233. When 16-bit and 24-bit words, which are typically color data, are serialized, those words are sent directly to overlay controls 230. Attribute controller 233 performs such tasks as blinking and underlining operations in text modes. The eight bits output from attribute controller 233 are pseudo-color pixels used to index CLUT 234. CLUT 234 preferably outputs 24-bit words of pixel data to output multiplexer 231 with each index. When video data is being pipelined through graphics backend pipeline 205 from the on-screen memory, CLUT 234 is bypassed.
The eight bit pseudo-color pixels output from attribute controller 233 are also sent to overlay controls 230. In the preferred embodiment, data is continuously pipelined from on-screen memory through graphics backend pipeline 205 to the inputs of output multiplexer 231. Window data from off-screen memory, however, is only retrieved from memory and pipelined through video backend pipeline 204 when a window is being displayed. In other words, when a window has been reached, as determined by control bits set by CPU 101 in VW control registers 222, video window display controls 222 generate addresses to retrieve the corresponding data from the off-screen memory space of frame buffer 107. Preferably, video FIFOs 223 and 224 are filled before the raster scan actually reaches the display window such that the initial pixel data is available immediately once the window has been reached. In order to ensure that graphics memory data continues to be provided to graphics backend pipeline 205, video window display controls 222 “steal” page cycles between page accesses to the graphics memory. It should be noted that once the window has been reached, the frequency of cycles used to retrieve window data increases over the number used to fill the video FIFOs when outside a window. When the frequency of window page accesses increases, video window display controls 222/arbiter 221 preferably “steal” cycles from page cycles being used to write data into the frame buffer.
The graphics pseudo-pixels output from attribute controller 233 and the 16-bit or 24-bit graphics or video data output directly from serializer 236 are provided to the inputs of color comparison circuitry 302. Also input to color comparison circuitry 302 are 16 or 24-bit overlay color key bits stored in overlay color key register 303. Overlay color key register 303 resides within the address space of, and is loaded by, CPU 101. Depending on the mode, color comparison circuitry 302 compares selected bits from the overlay color key register 303 with either the 8 bits indexing look-up table 234 in the color look-up table mode (pseudo-color mode) or the 16-bits (24-bits in the alternate embodiment) passed directly from serializer 236. It should be noted that in the illustrated embodiment, overlay color key register 303 holds 24 overlay color key bits, eight each for red, green, and blue index comparisons. The specific overlay color key bits compared with the input graphics data are provided in Table I:
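The mode-dependent comparison performed by color comparison circuitry 302 can be sketched as below. Table I's exact per-mode bit assignments are not reproduced here, so the masks shown (full 8, 16, or 24 bits per mode) are simplifying assumptions:

```python
def color_key_match(mode, pixel, key):
    """Compare incoming graphics data against the overlay color key.
    In pseudo-color mode only the 8-bit CLUT index is compared; in
    direct-color modes the 16-bit or 24-bit value passed straight
    from the serializer is compared."""
    if mode == "pseudo":       # 8-bit index into the color look-up table
        return (pixel & 0xFF) == (key & 0xFF)
    if mode == "direct16":     # 16-bit 5:6:5 / 5:5:5 data
        return (pixel & 0xFFFF) == (key & 0xFFFF)
    if mode == "direct24":     # 24-bit true color (alternate embodiment)
        return (pixel & 0xFFFFFF) == (key & 0xFFFFFF)
    raise ValueError("unknown comparison mode")
```

The result of this comparison is what drives the “K” control input of overlay control multiplexer 304, as described below.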
OVERLAY COLOR KEY BITS COMPARED
As shown in
The output of color comparison circuitry 302 is passed to the “K” control input of overlay control multiplexer 304. The “P” control input to multiplexer 304 is provided from pixel position comparison circuitry 305. The data inputs to multiplexer 304 are coupled to an 8-bit overlay OP Code (OOC) register 306. The output of multiplexer 304 is used as one control input to output multiplexer 231 which, along with a single bit set by CPU 101 into output control register 307, selects which of the data received at the data inputs of multiplexer 231 will be output to DAC 210.
Pixel position comparison circuitry 305 includes three inputs coupled respectively to video window 1 position control circuitry 308, CRT position control circuitry 309 and video window 2 position control circuitry 310. In the illustrated embodiment, CRT position controller 309 is located within CRT controller 220 while video window 1 position control circuitry 308 and video window 2 position control circuitry 310 are located within video display window controls 222 (FIG. 2). CRT position control circuitry 309 includes counters which track the position of the current pixel being generated for display. In the preferred embodiment, CRT position control circuitry 309 includes at least an x-position counter which tracks the generation of each pixel along a given display line and a y-position counter which tracks the generation of each display line in a screen. The x-position counter may for example count pixels by counting each VCLK period between occurrences of the horizontal synchronization signal (HSYNC) controlling display unit 106. The y-position counter may for example count each HSYNC signal occurring between each vertical synchronization signal (VSYNC) controlling the screen generation on display unit 106.
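The counter scheme just described can be sketched behaviorally: the x counter advances on VCLK, HSYNC resets it and advances the y counter, and VSYNC clears both. This is an illustrative model of CRT position control circuitry 309, not its implementation:

```python
class CrtPosition:
    """Track the raster position with an x counter clocked by VCLK and
    a y counter clocked by HSYNC, both cleared at the start of a frame."""
    def __init__(self):
        self.x = 0
        self.y = 0
    def vclk(self):    # one pixel period along the current line
        self.x += 1
    def hsync(self):   # end of line: reset x, advance to next line
        self.x = 0
        self.y += 1
    def vsync(self):   # start of a new frame: reset both counters
        self.x = 0
        self.y = 0

def in_window(pos, x0, y0, width, height):
    """True when the current raster position lies inside a display
    window; this is the comparison driving the 'P' control input."""
    return x0 <= pos.x < x0 + width and y0 <= pos.y < y0 + height
```

Comparators in pixel position comparison circuitry 305 match these counters against the programmed window position registers to activate the “P” input of multiplexer 304.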
In a second mode, 16 bits are received from serializer 236. The eight LSBs are passed through multiplexer 405 to comparator 406 and the eight MSBs are passed to comparator 407. Control signal
A 4-bit OP Code loaded by CPU 101 into overlay OP Code register 306, in conjunction with the control signals applied to the “P” and “K” control inputs to multiplexer 304, controls the presentation of an active (assumed high in the illustrated embodiment) control signal to the “B” control input of output multiplexer 231. The other (“A”) control input to output multiplexer 231 receives a bit from overlay mode register 402 (FIG. 4), as loaded by CPU 101. In the illustrated embodiment, the selection between the streams from the graphics and video backends at the 0, 1 and 2 inputs to output multiplexer 231, in response to the signals presented at the corresponding control inputs “A” and “B”, is in accordance with Table II:
|Control Input A||Control Input B||Selected Stream|
|0||0||Graphics or video pixels from graphics pipeline 205|
|1||0||Graphics pixels from CLUT 234 at input 1|
|0||1||Video or graphics from video backend 204|
|1||1||Video or graphics from video backend 204|
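The Table II selection at output multiplexer 231 reduces to a small case analysis, sketched below. The function name and the use of plain Python values to stand in for the three pixel streams are illustrative assumptions.

```python
def output_mux_231(a, b, graphics_pipe, clut_pixels, video_backend):
    """Sketch of stream selection at output multiplexer 231 per Table II.

    a: bit from overlay mode register 402, loaded by CPU 101
    b: control signal produced by overlay control multiplexer 304
    """
    if b:                      # B active: video backend 204 wins (A ignored)
        return video_backend
    if a:                      # A=1, B=0: graphics pixels from CLUT 234
        return clut_pixels
    return graphics_pipe       # A=0, B=0: pixels from graphics pipeline 205
```

Note that when “B” is active the “A” bit is irrelevant, which is why the last two rows of Table II select the same stream.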
The OP Codes used in the illustrated embodiment, the resulting effective overlay and the corresponding active control inputs to multiplexer 304 are listed in Table III (the active state is assumed):
|OOC Register 306 Value||Active Control Inputs||Effective Overlay|
|0h||None||Paint pixels from graphics pipeline 205 only|
|Ah||Input P||Paint from video pipeline 204 within window VDWn|
|8h||Inputs K and P||Paint from video pipeline 204 within window VDWn when the color key matches|
|Ch||Input K||Paint from video pipeline 204 wherever the color key matches|
In the illustrated embodiment, if a 0h is written into OOC register 306 by CPU 101, only pixels from graphics pipeline 205 are passed through output multiplexer 231. In this case any signals applied to the P and K control inputs to multiplexer 304 have no effect (i.e., will not result in a high output from multiplexer 304). If an Ah is written into OOC register 306, pixels from video pipeline 204 are passed to DAC 210 only when pixel position comparison circuitry 305 determines that the raster scan has reached a pixel in the window and the control signal going to the P input of multiplexer 304 is therefore active. If, on the other hand, an 8h is written into OOC register 306, data from video pipeline 204 is passed through output multiplexer 231 to DAC 210 when pixel position comparison circuitry 305 determines that the raster scan has reached a pixel on the display screen within the window and color compare circuitry 302 has determined that the incoming data from graphics pipeline 205 matches the overlay color key held in overlay color key register 303; that is, the video data is passed only when both the P and K inputs to multiplexer 304 are active. Finally, when an OP Code of Ch is programmed into OOC register 306, data from video pipeline 204 is passed whenever the incoming data from graphics pipeline 205 matches the overlay color key held in overlay color key register 303. In this case, activation of the K control input activates the output of multiplexer 304, switching output multiplexer 231 to pass the corresponding video pixels.
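The four OP Code behaviors described above can be summarized in one decision function. This is a sketch of the described gating, not the hardware: treating the codes as a flat case analysis (rather than decoding individual op-code bits) is a modeling simplification, and the function name is an assumption.

```python
def overlay_select_video(op_code, p, k):
    """Return True when the displayed pixel should come from video
    pipeline 204, per the OOC register 306 op codes (a sketch).

    p: pixel position circuitry 305 reports the raster is inside the window
    k: color circuitry 302 reports the graphics pixel matches the color key
    """
    if op_code == 0x0:   # graphics pipeline 205 only; P and K ignored
        return False
    if op_code == 0xA:   # video inside the window, regardless of color key
        return p
    if op_code == 0x8:   # video only where window AND color key both agree
        return p and k
    if op_code == 0xC:   # video wherever the color key matches
        return k
    raise ValueError("op code %#x not modeled" % op_code)
```

With op code 8h, for example, a color-key match outside the window (`p=False, k=True`) still paints graphics, which is what lets a keyed window be clipped to a rectangular region.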
Display control circuits embodying the principles of the present invention have substantial advantages over the prior art. In particular, output control circuits built in accordance with these principles allow for the flexible display of both graphics and video on the same screen. Pixel position comparison circuitry 305, along with video window position control circuits 308 and 310 and CRT position control circuitry 309, allows one or more windows from off-screen memory to be generated in specific areas of a display screen to the exclusion of any simultaneously generated data from on-screen memory. Further, color comparison circuitry 302, operating in conjunction with an overlay color key written into overlay color key register 303, allows window data to be presented on the display screen, to the exclusion of any concurrently generated graphics data, without the need for precise x- and y-position data for the window. Finally, the use of graphics data from graphics pipeline 205 to control the output overlay provides additional advantages, since video data can be subject to graininess and noise.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US3618035||Apr 17, 1969||Nov 2, 1971||Bell Telephone Labor Inc||Video-telephone computer graphics system|
|US3673324||Dec 22, 1970||Jun 27, 1972||Nippon Electric Co||Video mixing/special effects amplifiers|
|US4425581||Apr 17, 1981||Jan 10, 1984||Corporation For Public Broadcasting||System for overlaying a computer generated video signal on an NTSC video signal|
|US4498098||Jun 2, 1982||Feb 5, 1985||Digital Equipment Corporation||Apparatus for combining a video signal with graphics and text from a computer|
|US4599611||Jun 2, 1982||Jul 8, 1986||Digital Equipment Corporation||Interactive computer-based information display system|
|US4719503||Oct 14, 1986||Jan 12, 1988||Rca Corporation||Display processor with color matrixing circuitry and two map memories storing chrominance-only data|
|US4740832||Oct 14, 1986||Apr 26, 1988||Technology, Inc., 64||Image storage using separately scanned luminance and chrominance variables|
|US4745462||Mar 2, 1987||May 17, 1988||Rca Corporation||Image storage using separately scanned color component variables|
|US4771279||Jul 10, 1987||Sep 13, 1988||Silicon Graphics, Inc.||Dual clock shift register|
|US4779144||Mar 2, 1987||Oct 18, 1988||Technology Inc., 64||Image storage using separately scanned luminance-detail and narrowband color-component variables|
|US4839728||Mar 23, 1987||Jun 13, 1989||Rca Licensing Corporation||Picture-in-picture video signal generator|
|US4862269||Jul 20, 1988||Aug 29, 1989||Sony Corporation||Memory control apparatus|
|US4868548||Sep 29, 1987||Sep 19, 1989||Brooktree Corporation||Clock synchronization system|
|US4876600||Jan 26, 1988||Oct 24, 1989||Ibp Pietzsch Gmbh||Method and device for representing a composite image on a screen of a screen device|
|US4878117||Feb 12, 1988||Oct 31, 1989||Ricoh Company, Ltd.||Video signal mixing unit for simultaneously displaying video signals having different picture aspect ratios and resolutions|
|US4914509||Mar 3, 1988||Apr 3, 1990||Mitsubishi Denki Kabushiki Kaisha||Color video signal synthesizer|
|US4947257||Oct 4, 1988||Aug 7, 1990||Bell Communications Research, Inc.||Raster assembly processor|
|US4991122||Aug 31, 1989||Feb 5, 1991||General Parametrics Corporation||Weighted mapping of color value information onto a display screen|
|US4994912||Feb 23, 1989||Feb 19, 1991||International Business Machines Corporation||Audio video interactive display|
|US4994914||Dec 12, 1989||Feb 19, 1991||Digital Equipment Corporation||Composite video image device and related method|
|US5001469||Jun 29, 1988||Mar 19, 1991||Digital Equipment Corporation||Window-dependent buffer selection|
|US5003491||Mar 10, 1988||Mar 26, 1991||The Boeing Company||Multiplying video mixer system|
|US5027212||Dec 6, 1989||Jun 25, 1991||Videologic Limited||Computer based video/graphics display system|
|US5065243||Aug 15, 1990||Nov 12, 1991||Kabushiki Kaisha Toshiba||Multi-screen high-definition television receiver|
|US5065346||Dec 3, 1987||Nov 12, 1991||Sony Corporation||Method and apparatus for employing a buffer memory to allow low resolution video data to be simultaneously displayed in window fashion with high resolution video data|
|US5097257||Dec 26, 1989||Mar 17, 1992||Apple Computer, Inc.||Apparatus for providing output filtering from a frame buffer storing both video and graphics signals|
|US5208583||Oct 9, 1990||May 4, 1993||Bell & Howell Publication Systems, Company||Accelerated pixel data movement|
|US5218432||Jan 2, 1992||Jun 8, 1993||Tandy Corporation||Method and apparatus for merging video data signals from multiple sources and multimedia system incorporating same|
|US5220312||Sep 29, 1989||Jun 15, 1993||International Business Machines Corporation||Pixel protection mechanism for mixed graphics/video display adaptors|
|US5225911||May 7, 1991||Jul 6, 1993||Xerox Corporation||Means for combining data of different frequencies for a raster output device|
|US5229852||Jul 9, 1990||Jul 20, 1993||Rasterops Corporation||Real time video converter providing special effects|
|US5229855||Oct 23, 1991||Jul 20, 1993||International Business Machines Corporation||System and method for combining multiple composite video signals|
|US5243433||Jan 6, 1992||Sep 7, 1993||Eastman Kodak Company||Digital image interpolation system for zoom and pan effects|
|US5243447||Jun 19, 1992||Sep 7, 1993||Intel Corporation||Enhanced single frame buffer display system|
|US5245322||Dec 11, 1990||Sep 14, 1993||International Business Machines Corporation||Bus architecture for a multimedia system|
|US5245702||Jul 5, 1991||Sep 14, 1993||Sun Microsystems, Inc.||Method and apparatus for providing shared off-screen memory|
|US5251298||Feb 25, 1991||Oct 5, 1993||Compaq Computer Corp.||Method and apparatus for auxiliary pixel color management using monomap addresses which map to color pixel addresses|
|US5257348||Sep 17, 1992||Oct 26, 1993||Apple Computer, Inc.||Apparatus for storing data both video and graphics signals in a single frame buffer|
|US5258750||Sep 21, 1989||Nov 2, 1993||New Media Graphics Corporation||Color synchronizer and windowing system for use in a video/graphics system|
|US5274753||Apr 19, 1993||Dec 28, 1993||Apple Computer, Inc.||Apparatus for distinguishing information stored in a frame buffer|
|US5291188||Jun 17, 1991||Mar 1, 1994||Sun Microsystems, Inc.||Method and apparatus for allocating off-screen display memory|
|US5294983||May 29, 1991||Mar 15, 1994||Thomson Consumer Electronics, Inc.||Field synchronization system with write/read pointer control|
|US5319388||Jun 22, 1992||Jun 7, 1994||Vlsi Technology, Inc.||VGA controlled having frame buffer memory arbitration and method therefor|
|US5319447||Feb 11, 1993||Jun 7, 1994||Sip-Societa Italiana Per L'esercizio Delle Telecommunicazioni P.A.||Video control circuit for multimedia applications with video signal synchronizer memory|
|US5327243||May 30, 1991||Jul 5, 1994||Rasterops Corporation||Real time video converter|
|US5335321||Jun 19, 1992||Aug 2, 1994||Intel Corporation||Scalable multimedia platform architecture|
|US5341318||Dec 1, 1992||Aug 23, 1994||C-Cube Microsystems, Inc.||System for compression and decompression of video data using discrete cosine transform and coding techniques|
|US5341442 *||Aug 24, 1993||Aug 23, 1994||Supermac Technology, Inc.||Method and apparatus for compression data by generating base image data from luminance and chrominance components and detail image data from luminance component|
|US5345252||Jul 19, 1991||Sep 6, 1994||Silicon Graphics, Inc.||High speed cursor generation apparatus|
|US5351087||May 29, 1991||Sep 27, 1994||Thomson Consumer Electronics, Inc.||Two stage interpolation system|
|US5365278||Feb 10, 1994||Nov 15, 1994||Thomson Consumer Electronics||Side by side television pictures|
|US5381347 *||Dec 21, 1992||Jan 10, 1995||Microsoft Corporation||Method and system for displaying images on a display device using an offscreen video memory|
|US5396263||Mar 10, 1992||Mar 7, 1995||Digital Equipment Corporation||Window dependent pixel datatypes in a computer video graphics system|
|US5402147||Oct 30, 1992||Mar 28, 1995||International Business Machines Corporation||Integrated single frame buffer memory for storing graphics and video data|
|US5402506||Jun 23, 1994||Mar 28, 1995||Pixel Semiconductor, Inc.||Apparatus for quantizing pixel information to an output video display space|
|US5402513||Jun 27, 1994||Mar 28, 1995||Pixel Semiconductor, Inc.||Video window generator with scalable video|
|US5406306||Feb 5, 1993||Apr 11, 1995||Brooktree Corporation||System for, and method of displaying information from a graphics memory and a video memory on a display monitor|
|US5410547||Jun 17, 1993||Apr 25, 1995||Cirrus Logic, Inc.||Video controller IC with built-in test circuit and method of testing|
|US5420643||May 31, 1994||May 30, 1995||Thomson Consumer Electronics, Inc.||Chrominance processing system for compressing and expanding video data|
|US5426731||Jul 19, 1994||Jun 20, 1995||Fuji Photo Film Co., Ltd.||Apparatus for processing signals representative of a computer graphics image and a real image|
|US5430486||Aug 17, 1993||Jul 4, 1995||Rgb Technology||High resolution video image transmission and storage|
|US5432905||Feb 10, 1994||Jul 11, 1995||Chips And Technologies, Inc.||Advanced asyncronous video architecture|
|US5434590||Oct 14, 1993||Jul 18, 1995||International Business Machines Corporation||Multimedia system|
|US5434676||Dec 17, 1993||Jul 18, 1995||Pioneer Electronic Corporation||Apparatus for mixing video signals having different numbers of lines|
|US5440683||Oct 20, 1994||Aug 8, 1995||Cirrus Logic, Inc.||Video processor multiple streams of video data in real-time|
|US5455626||Nov 15, 1993||Oct 3, 1995||Cirrus Logic, Inc.||Apparatus, systems and methods for providing multiple video data streams from a single source|
|US5455628||Sep 16, 1993||Oct 3, 1995||Videologic Limited||Converter to convert a computer graphics signal to an interlaced video signal|
|US5469221||Aug 23, 1994||Nov 21, 1995||Seiko Epson Corporation||Video multiplexing system for superimposition of scalable video data streams upon a background video data stream|
|US5473573||May 9, 1994||Dec 5, 1995||Cirrus Logic, Inc.||Single chip controller-memory device and a memory architecture and methods suitable for implementing the same|
|US5488390||Jul 29, 1993||Jan 30, 1996||Cirrus Logic, Inc.||Apparatus, systems and methods for displaying a cursor on a display screen|
|US5502837||Aug 11, 1992||Mar 26, 1996||Sun Microsystems, Inc.||Method and apparatus for clocking variable pixel frequencies and pixel depths in a memory display interface|
|US5506604||Apr 6, 1994||Apr 9, 1996||Cirrus Logic, Inc.||Apparatus, systems and methods for processing video data in conjunction with a multi-format frame buffer|
|US5510843||Sep 30, 1994||Apr 23, 1996||Cirrus Logic, Inc.||Flicker reduction and size adjustment for video controller with interlaced video output|
|US5537128||Aug 4, 1993||Jul 16, 1996||Cirrus Logic, Inc.||Shared memory for split-panel LCD display systems|
|US5539464||Jun 7, 1995||Jul 23, 1996||Cirrus Logic, Inc.||Apparatus, systems and methods for generating displays from dual source composite data streams|
|US5539465||Jun 7, 1995||Jul 23, 1996||Cirrus Logic, Inc.||Apparatus, systems and methods for providing multiple video data streams from a single source|
|US5542038||Jul 29, 1993||Jul 30, 1996||Cirrus Logic, Inc.||Method and system for generating dynamic zoom codes|
|US5543842||Jun 7, 1995||Aug 6, 1996||Cirrus Logic, Inc.||Apparatus and method for providing multiple video data streams from a single source|
|US5546531||Apr 20, 1995||Aug 13, 1996||Intel Corporation||Visual frame buffer architecture|
|US5553220||Sep 7, 1993||Sep 3, 1996||Cirrus Logic, Inc.||Managing audio data using a graphics display controller|
|US5557302||May 4, 1995||Sep 17, 1996||Next, Inc.||Method and apparatus for displaying video data on a computer display|
|US5559954||Mar 29, 1995||Sep 24, 1996||Intel Corporation||Method & apparatus for displaying pixels from a multi-format frame buffer|
|US5577203||Mar 13, 1995||Nov 19, 1996||Cirrus Logic, Inc.||Video processing methods|
|US5581279||Nov 5, 1993||Dec 3, 1996||Cirrus Logic, Inc.||VGA controller circuitry|
|US5581280||Mar 13, 1995||Dec 3, 1996||Cirrus Logic, Inc.||Video processing apparatus, systems and methods|
|US5586306||May 23, 1995||Dec 17, 1996||Cirrus Logic, Inc.||Integrated circuit servo system control for computer mass storage device with distributed control functionality to reduce transport delay|
|US5608864||Apr 29, 1994||Mar 4, 1997||Cirrus Logic, Inc.||Variable pixel depth and format for video windows|
|US5625379||Mar 13, 1995||Apr 29, 1997||Cirrus Logic, Inc.||Video processing apparatus systems and methods|
|US5640332||May 16, 1996||Jun 17, 1997||Brooktree Corporation||Multimedia graphics system|
|US5646651||Dec 14, 1994||Jul 8, 1997||Spannaus; John||Block mode, multiple access multi-media/graphics memory|
|US5701270||Feb 1, 1996||Dec 23, 1997||Cirrus Logic, Inc.||Single chip controller-memory device with interbank cell replacement capability and a memory architecture and methods suitable for implementing the same|
|US5752010||Sep 10, 1993||May 12, 1998||At&T Global Information Solutions Company||Dual-mode graphics controller with preemptive video access|
|US5821918||Mar 13, 1995||Oct 13, 1998||S3 Incorporated||Video processing apparatus, systems and methods|
|1||"110 MHz Monolithic CMOS Video CacheDAC(TM)", Product Overview ATI019959-020014.|
|2||"110 MHz Monolithic CMOS Video CacheDAC(TM)", Product Overview ATI019895-019958.|
|3||"135 MHz Monolithic CMOS Video CacheDAC(TM)", Product Overview ATI020015-020068.|
|4||"82750PD Video Processor Programmer's Reference Manual", Intel , Sep. 1993.|
|5||"Chip Designer Brooktree Corp Comes Out with Two Devices to Facilitate Cheap Desktop Video Design", Computergram International; Feb. 12, 1993; p. N/A.|
|6||"CL-PX2070: Preliminary Data Book", Jul. 1993, ATI019027-ATI019142.|
|7||"CL-PX2080 Media DAC Preliminary Data Sheet," Nov. 1992, ATI023971-024065.|
|8||"CL-PX2080: MediaDAC(TM)", May 29, 1992, ATI020370-ATI020373.|
|9||"CL-PX2080: Preliminary Product Bulletin", Aug. 1992, ATI020381-ATI020384.|
|10||"CL-PX2085 Preliminary Data Book," dated Oct. 1993, Pixel Semiconductor, ATI011566-011628.|
|11||"Expert Report of William G. Mears".|
|12||"Intel-ATI in Race Against VESA Bus," Electronic Engineering Times, ATI026182-026183.|
|13||"Intel's i750(R) Video Processor-The Programmable Solution," Manepally, et al. Intel Princeton Operation ATI 057155-057160.|
|14||"Multimedia and Supercomputing Processors", Intel Corporation, 1991.|
|15||"OTI-107 Data Book," version 1.0, Oak Technology, Inc. Apr. 6, 1993. ATI17515-520 (Apr. 1993 Data Book.).|
|16||"Parallax 1280 Reference Manual", Parallax Graphics, ATI32068-32766.|
|17||"Parallax Graphics VIPER Reference Manual", Parallax Graphics, ATI031566-32067.|
|18||"PxVPS Programmer's Reference Manual," Oct. 1993, Pixel Semiconductor, CL049172-049363.|
|19||"Serpents in Paradise", UNIX Review, 110 vol. 7, No. 9, ATI019425-26.|
|20||"Sony Picks Parallax Video Boards", Computergram International, ATI020575.|
|21||"Spitfire 64-bit Multimedia GUI Accelerator-Preliminary Specification", Oak Technology, Sep. 1994.|
|22||"SPITFIRE-64-bit Multimedia GUI Accelerator-OTI-64107 Preliminary Specification," Oak Technology, Jan. 14, 1994, ATI18036-018197 (Jan. 1994 Spec.).|
|23||"Tseng Labs Video Image Processor (VIPER)-Pre-release Information," Tseng Labs, ATI017499-017502.|
|24||"Tseng LabsET4000/W32I Graphics Accelerator," May 12, 1993, ATI017493-498.|
|25||"User Interface," Guide (ATI020390-94).|
|26||"VIPER Video Image Processor Data Book", Tseng Labs, ATI002393-002487.|
|27||1280 Circuit Schematics, ATI018984-97.|
|28||*||Anthony Cataldo "WD, Cirrus Show Video Playback ICs" Electronic News, Oct. 1994, v40, n2035, p66(1).|
|29||Article entitled "Industry's first Video Cache DAC", publication unavailable, date unavailable.|
|30||*||Bill Snyder "The Cirrus Scar" PC Week, Oct. 1994, v11, n42, pA1 (3).|
|31||Brooktree Advance Information for 135 MHz Monolithic CMOS Video Cache DAC(TM), Brooktree Corporation, Oct. 15, 1993, pp. 1-55.|
|32||*||Dave Bursky "Acceleration puts the 'snap' into graphics" Electronic Design, Jul. 1994, v42, n15, p55(7).|
|33||*||Edge: Work-Group Computing Report, Oct. 1994, v5, n228 p15(1), Edge Publishing.|
|34||ET4000/W32 Graphics Accelerator Data Book, Tseng Labs, ATI002488-ATI002744.|
|35||Gosling et al., "The News Book by Sun Microsystems, Inc.", ATI027719-37.|
|36||Harney, Kevin et al., "82750DB Programming Guide", Intel Corporation, Jan. 7, 1991, pp. 1-63.|
|37||Harney, Kevin et al., "The i750(R) Video Processor: A Total Multimedia Solution", Digital Multimedia Systems, Communications of the ACM, Apr. 1991, vol. 34, No. 4, pp. 65-78.|
|38||*||James Turley "Brooktree Reveals Multimedia Plans," Microprocessor Report, Dec. 1994, v8, n16, p. 19 (1).|
|39||*||Jeff Mace "Mainstream graphics accelerators rush power" PC Magazine, Dec. 1994, v13, n21, p239 (17).|
|40||Mirabella, Rich et al., "Sony, Parallax Graphics Agreement Brings Full-Motion Video to Unix Workstations", ATI020577-020578.|
|41||Pico, Marty , "Coprocessors Provide Integrated Video and Graphics", ATI018935-18936.|
|42||Shandle, Jack, "Windows Accelerator Chip Provides Multimedia Port", Electronic Design, Jun. 24, 1993, pp. 45, 46, 48-50.|
|43||*||Steve Undy "A Low-Cost Graphics & Multimedia Workstation Chip Set" IEEE Micro. Apr. 1994, pp. 10-82.|
|44||Viper Circuit Schematics, ATI0118970-83.|
|45||Wilson, Ron, "Brooktree finishes video set", Electronic Engineering Times, Feb. 1, 1993, p. 57.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7793029||May 17, 2005||Sep 7, 2010||Nvidia Corporation||Translation device apparatus for configuring printed circuit board connectors|
|US8021193||Apr 25, 2005||Sep 20, 2011||Nvidia Corporation||Controlled impedance display adapter|
|US8021194||Dec 28, 2007||Sep 20, 2011||Nvidia Corporation||Controlled impedance display adapter|
|US8156014 *||Mar 26, 2001||Apr 10, 2012||Sony Corporation||Communication service method and communication apparatus thereof for transmitting advertisements in a private communication environment|
|US8355081 *||Mar 26, 2008||Jan 15, 2013||Realtek Semiconductor Corp.||Digital display control device and method thereof|
|US8373649||Apr 11, 2008||Feb 12, 2013||Seiko Epson Corporation||Time-overlapping partial-panel updating of a bistable electro-optic display|
|US8412872||Dec 12, 2005||Apr 2, 2013||Nvidia Corporation||Configurable GPU and method for graphics processing using a configurable GPU|
|US8417838||Dec 12, 2005||Apr 9, 2013||Nvidia Corporation||System and method for configurable digital communication|
|US8621039||Mar 16, 2012||Dec 31, 2013||Sony Corporation||Communication service method and communication apparatus thereof|
|US8723891 *||Aug 23, 2010||May 13, 2014||Ncomputing Inc.||System and method for efficiently processing digital video|
|US20010039520 *||Mar 26, 2001||Nov 8, 2001||Motoki Nakade||Communication service method and communication apparatus thereof|
|US20050285868 *||Jun 15, 2005||Dec 29, 2005||Atsushi Obinata||Display controller, electronic appliance, and method of providing image data|
|US20080239147 *||Mar 26, 2008||Oct 2, 2008||Realtek Semiconductor Corp.||Digital display control device and method thereof|
|US20110080519 *||Apr 7, 2011||Ncomputing Inc.||System and method for efficiently processing digital video|
|U.S. Classification||345/546, 345/548, 345/531, 345/506, 345/634, 345/558, 345/505, 345/519|
|International Classification||G09G5/397, G06T11/20, G06T1/20, G09G5/36, G09G5/39, G06F15/80, G06F15/76|
|Aug 4, 2008||REMI||Maintenance fee reminder mailed|
|Sep 17, 2008||FPAY||Fee payment (year of fee payment: 12)|
|Sep 17, 2008||SULP||Surcharge for late payment (year of fee payment: 11)|