WO2000011562A1 - Apparatus and method for performing setup operations in a 3-d graphics pipeline using unified primitive descriptors - Google Patents


Info

Publication number
WO2000011562A1
Authority
WO
WIPO (PCT)
Prior art keywords
tile
vertices
setup
primitive
edge
Prior art date
Application number
PCT/US1999/019240
Other languages
French (fr)
Other versions
WO2000011562B1 (en)
Inventor
Jerome F. Duluk, Jr.
Richard E. Hessel
Vaughn T. Arnold
Jack Benkual
George Cuan
Steven L. Dodgen
Emerson S. Fang
Hengwei Hsu
Sushma S. Trivedi
Original Assignee
Apple Computer, Inc.
Priority date
Filing date
Publication date
Application filed by Apple Computer, Inc. filed Critical Apple Computer, Inc.
Publication of WO2000011562A1 publication Critical patent/WO2000011562A1/en
Publication of WO2000011562B1 publication Critical patent/WO2000011562B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/04 Texture mapping
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/30 Clipping
    • G06T15/40 Hidden part removal
    • G06T15/50 Lighting effects
    • G06T15/80 Shading
    • G06T15/83 Phong shading
    • G06T15/87 Gouraud shading

Definitions

  • PROCESSOR WITH PIPELINE STATE STORAGE AND RETRIEVAL (Atty. Doc.
  • the present invention relates generally to a computer structure and method for processing three-dimensional ("3-D") computer graphics in a 3-D graphics processor. More particularly, the present invention is directed to a computer structure and method for performing setup operations in a tiled graphics pipeline architecture using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values.
  • 3-D three-dimensional
  • 2-D two-dimensional
  • the object may be a simple geometry primitive such as a point, a line segment, or a polygon.
  • More complex objects can be rendered onto a display device by representing the objects with a series of connected planar polygons, such as, for example, by representing the objects as a series of connected planar triangles.
  • All geometry primitives may eventually be described in terms of one vertex or a set of vertices; for example, a coordinate (x, y, z) defines a point, such as the endpoint of a line segment or a corner of a polygon.
  • a generic pipeline is merely a series of cascading processing units, or stages, wherein the output of a prior stage serves as the input of a subsequent stage.
  • these stages include, for example, per- vertex operations, primitive assembly operations, pixel operations, texture assembly operations, rasterization operations, and fragment operations.
  • a tiled architecture is a graphics pipeline architecture that associates image data, and in particular geometry primitives, with regions in a 2-D window, where the 2-D window is divided into multiple equally sized regions.
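As a concrete sketch of this association, the tiles overlapped by a primitive's axis-aligned bounding box can be enumerated as below. The 16-pixel tile size and the function name are illustrative assumptions, not taken from this document:

```python
# Hypothetical tile size; the document does not fix one at this point.
TILE_W = TILE_H = 16

def touched_tiles(xmin, ymin, xmax, ymax):
    """Return the (tx, ty) index of every tile the bounding box overlaps."""
    tx0, ty0 = int(xmin) // TILE_W, int(ymin) // TILE_H
    tx1, ty1 = int(xmax) // TILE_W, int(ymax) // TILE_H
    return [(tx, ty) for ty in range(ty0, ty1 + 1)
                     for tx in range(tx0, tx1 + 1)]
```

A primitive is then associated with (and later delivered for) each tile it touches, which is why a primitive spanning a tile boundary is handled once per tile.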
  • Tiled architectures are beneficial because they allow a graphics pipeline to efficiently operate on smaller amounts of image data. In other words, a tiled graphics pipeline architecture presents an opportunity to incorporate specialized, higher-performance graphics hardware into the graphics pipeline.
  • conventional graphics pipelines that do have tiled architectures do not perform mid-pipeline sorting of the image data with respect to the regions of the 2-D window.
  • Conventional graphics pipelines typically sort image data either in software at the beginning of the graphics pipeline, before any image data transformations have taken place, or in hardware at the very end of the graphics pipeline, after rendering the image into a 2-D grid of pixels.
  • sorting image data at the very beginning of the graphics pipeline typically involves dividing intersecting primitives into smaller primitives where they intersect, thereby creating more vertices. Because none of these vertices have yet been transformed into an appropriate coordinate space, each of them must be transformed by a subsequent vertex transformation stage of the graphics pipeline. Coordinate spaces are known. Vertex transformation is computationally intensive, so increasing the number of vertices by subdividing primitives before transformation slows down the already slow vertex transformation process.
  • the z-value is a measure of the distance from the eyepoint to the point on the object represented by the pixel with which the z-value corresponds. Removing primitives or parts of primitives that are occluded by other geometry is beneficial because it optimizes a graphics pipeline by processing only those image data that will be visible. The process of removing hidden image data is called culling.
  • conventional graphics pipelines that do have tiled architectures also do not perform mid-pipeline culling operations. Because, as discussed in greater detail above, it is desirable to sort image data mid-pipeline, after image data coordinate transformations have taken place and before the image data has been texture mapped and/or rasterized, it is also desirable to remove hidden pixels from the image data before the image data has been texture mapped and/or rasterized. Therefore, what is also needed is a tiled graphics pipeline architecture that performs not only mid-pipeline sorting but also mid-pipeline culling.
  • Such image data information includes, for example, those vertices defining the intersection of a primitive with a tile's edges. To determine this information, the image data must be clipped to the tile, and the result must be sent to the mid-pipeline culling unit. Therefore, because a mid-pipeline cull unit is novel and its input requirements are unique, what is also needed is a structure and method for a mid-pipeline post tile sorting setup unit that sets up image data information for the mid-pipeline culling unit.
  • it is desirable that the logic in a mid-pipeline culling unit in a tiled graphics pipeline architecture be as high performance and streamlined as possible.
  • the logic in a culling unit can be optimized for high performance by reducing the number of branches in its logical operations.
  • conventional culling operations typically include logic, or algorithms, to determine which of a primitive's vertices lie within a tile, hereinafter referred to as a vertex/tile intersection algorithm.
  • Conventional culling operations typically implement a number of different vertex/tile intersection algorithms to accomplish this, one algorithm for each primitive type.
  • a beneficial aspect of needing only one such algorithm to determine whether a line segment's or a triangle's vertices lie within a tile, as compared to requiring two such algorithms (one for each primitive type), is that the total number of branches in the logic implementing such vertex/tile intersection algorithms is reduced.
  • one set of algorithms/set of equations/set of hardware could be used to perform the vertex/tile intersection algorithm for a number of different primitive types.
  • Other stages of a graphics pipeline could also benefit in a similar manner from a procedure for representing different primitives as a single primitive type, while still retaining each respective primitive type's unique geometric information.
  • a processing stage that sets up information for a culling unit could also share a set of algorithms/set of equations/set of hardware for calculating different primitive information.
  • geometry primitive vertices, i.e., x-coordinates and y-coordinates, are typically passed between pipeline stages in screen based coordinates.
  • x-coordinates and y-coordinates are represented as integers having a limited number of fractional bits (sub pixel bits).
  • to summarize, tile based graphics pipeline architectures have been limited by sorting image data either prior to the graphics pipeline or in hardware at the end of the graphics pipeline, by the absence of tile based culling units and of mid-pipeline post tile sorting setup units to support tile based culling operations, and by larger vertex memory storage requirements.
  • the present invention overcomes the limitations of the state of the art by providing structure and method in a tile based graphics pipeline architecture for: (a) a mid-pipeline post tile sorting setup unit that supplies a mid-pipeline cull unit with tile relative image data information; (b) a unified primitive descriptor language for representing triangles and line segments as quadrilaterals, thereby reducing the logic branching requirements of a mid-pipeline culling unit; and (c) representing each of a primitive's vertices with tile relative y-values and screen relative x-values, thereby reducing the number of bits that need to be passed to subsequent stages of the graphics pipeline to accurately and efficiently represent a primitive's vertices.
  • a mid-pipeline setup unit is one processing stage of a tile based 3-D graphics pipeline.
  • the mid-pipeline setup unit processes image data in preparation for a subsequent mid-pipeline culling unit.
  • a mid-pipeline sorting unit, previous to the mid-pipeline setup unit, has already sorted the image data with respect to multiple tiles comprising a 2-D window.
  • The image data include vertices describing a primitive.
  • the mid-pipeline setup unit is adapted to determine a set of clipping points that identify an intersection of the primitive with the tile, and also adapted to compute a minimum depth value for that part of the primitive intersecting the tile.
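As an illustrative sketch of both determinations, a standard Sutherland-Hodgman clip of a convex primitive against the tile rectangle yields the clipping points, and the minimum depth falls out as the smallest z over the clipped vertices. This is an expository stand-in, not the patent's actual setup hardware, and the function names are invented:

```python
def clip_to_tile(poly, x0, y0, x1, y1):
    """Clip a convex polygon (a list of (x, y, z) vertices) to an
    axis-aligned tile [x0, x1] x [y0, y1].  The returned vertex list
    contains the clipping points where edges cross the tile boundary."""
    def lerp(q, p, t):
        # interpolate x, y and z together so depth survives clipping
        return tuple(a + t * (b - a) for a, b in zip(q, p))

    def clip_edge(pts, inside, intersect):
        out = []
        for i, p in enumerate(pts):
            q = pts[i - 1]                   # previous vertex (wraps around)
            if inside(p):
                if not inside(q):
                    out.append(intersect(q, p))
                out.append(p)
            elif inside(q):
                out.append(intersect(q, p))
        return out

    # clip successively against the four tile edges
    for axis, bound, keep_le in ((0, x0, False), (0, x1, True),
                                 (1, y0, False), (1, y1, True)):
        def inside(p, a=axis, b=bound, le=keep_le):
            return p[a] <= b if le else p[a] >= b
        def intersect(q, p, a=axis, b=bound):
            t = (b - q[a]) / (p[a] - q[a])
            return lerp(q, p, t)
        poly = clip_edge(poly, inside, intersect)
        if not poly:
            break
    return poly
```

A minimum depth value for the tile is then simply `min(z for _, _, z in clipped)`, and the clipped vertex list contains exactly the tile-edge intersection points the culling unit needs.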
  • the primitive's x-coordinates are screen based and the y-coordinates are tile based.
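A back-of-envelope sketch of the resulting bit savings; the 2048-pixel screen height, 16-pixel tile height, and 3 sub-pixel fraction bits are illustrative assumptions, not values taken from this document:

```python
import math

def coord_bits(extent_in_pixels, subpixel_bits):
    """Bits needed for a fixed-point coordinate spanning the given extent."""
    return math.ceil(math.log2(extent_in_pixels)) + subpixel_bits

screen_y_bits = coord_bits(2048, 3)  # screen-relative y: 11 integer + 3 fraction bits
tile_y_bits = coord_bits(16, 3)      # tile-relative y:    4 integer + 3 fraction bits
```

Under these assumptions a tile-relative y-value needs half the bits of a screen-relative one, which is the stated motivation for the mixed representation.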
  • the mid-pipeline setup unit is adapted to represent line segments and triangles as quadrilaterals. Both line segments and triangles in this embodiment are described with respective sets of four vertices. In the case of triangles, not all four vertices are needed to describe the triangle; one vertex will be degenerate, or not described.
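One way to picture the four-slot descriptor is sketched below. The half-width quadrilateral construction for lines anticipates the later FIG. 15 discussion, and the function names and the `None` convention for the degenerate vertex are illustrative assumptions, not the patent's exact formulation:

```python
import math

def line_to_quad(p0, p1, width):
    """Expand a line segment into the four corners of a quadrilateral by
    offsetting each endpoint half the line width along the unit normal."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length * width / 2, dx / length * width / 2
    return [(p0[0] + nx, p0[1] + ny), (p1[0] + nx, p1[1] + ny),
            (p1[0] - nx, p1[1] - ny), (p0[0] - nx, p0[1] - ny)]

def triangle_to_quad(v0, v1, v2):
    """A triangle uses the same four-slot descriptor; the fourth slot is
    degenerate (None here) and simply never contributes an edge."""
    return [v0, v1, v2, None]
```

Because both primitive types arrive in the same four-slot shape, a downstream vertex/tile intersection test needs only one code path.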
  • FIG. 1 is a block diagram illustrating aspects of a system according to an embodiment of the present invention, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values;
  • FIG. 2 is a block diagram illustrating aspects of a graphics processor according to an embodiment of the present invention, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values;
  • FIG. 3 is a block diagram illustrating other processing stages 210 of graphics pipeline 200 according to a preferred embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating other processing stages 220 of graphics pipeline 200 according to a preferred embodiment of the present invention
  • FIG. 5 illustrates vertex assignments according to a uniform primitive description, in one embodiment of the present invention, for describing polygons with an inventive descriptive syntax
  • FIG. 8 illustrates a block diagram of functional units of setup 215 according to an embodiment of the present invention, the functional units implementing the methodology of the present invention
  • FIG. 9 illustrates use of triangle slope assignments according to an embodiment of the present invention.
  • FIG. 10 illustrates slope assignments for triangles and line segments according to an embodiment of the present invention
  • FIG. 11 illustrates aspects of line segments orientation according to an embodiment of the present invention
  • FIG. 12 illustrates aspects of line segments slopes according to an embodiment of the present invention
  • FIG. 13 illustrates aspects of point preprocessing according to an embodiment of the present invention
  • FIG. 14 illustrates the relationship of trigonometric functions to line segment orientations
  • FIG. 15 illustrates aspects of line segment quadrilateral generation according to an embodiment of the present invention
  • FIG. 16 illustrates examples of x-major and y-major line orientation with respect to aliased and anti-aliased lines according to an embodiment of the present invention
  • FIG. 17 illustrates presorted vertex assignments for quadrilaterals
  • FIG. 18 illustrates a primitive's clipping points with respect to the primitive's intersection with a tile
  • FIG. 19 illustrates aspects of processing quadrilateral vertices that lie outside of a 2-D window according to an embodiment of the present invention
  • FIG. 20 illustrates an example of a triangle's minimum depth value vertex candidates according to an embodiment of the present invention
  • FIG. 21 illustrates examples of quadrilaterals having vertices that lie outside of a 2-D window range
  • FIG. 22 illustrates aspects of clip code vertex assignment according to an embodiment of the present invention.
  • FIG. 23 illustrates aspects of unified primitive descriptor assignments, including corner flags, according to an embodiment of the present invention.

5. Detailed Description of Preferred Embodiments of the Invention
  • the numerical precision of the calculations of the present invention is based on the precision requirements of previous and subsequent stages of the graphics pipeline.
  • the numerical precision to be used depends on a number of factors. Such factors include, for example, order of operations, number of operations, screen size, tile size, buffer depth, sub pixel precision, and precision of data. Numerical precision issues are known, and for this reason will not be described in greater detail herein.
  • the present invention provides: (1) a mid-pipeline post tile sorting setup unit that supports a mid-pipeline sorting unit and a mid-pipeline culling unit; (2) a procedure for uniformly describing primitives that allows different types of primitives to share common sets of algorithms/equations/hardware elements in the graphics pipeline; and (3) tile-relative y-values and screen-relative x-values that allow representation of spatial data on a region by region basis that is efficient and feasible for a tile based graphics pipeline architecture.
  • in FIG. 1 there is shown an embodiment of system 100, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values.
  • FIG. 1 illustrates how various software and hardware elements cooperate with each other.
  • System 100 utilizes a programmed general-purpose computer 101, and 3-D graphics processor 117.
  • Computer 101 is generally conventional in design, comprising: (a) one or more data processing units ("CPUs") 102; (b) memory 106a, 106b and 106c, such as fast primary memory 106a, cache memory 106b, and slower secondary memory 106c, for mass storage, or any combination of these three types of memory; (c) optional user interface 105, including display monitor 105a, keyboard 105b, and pointing device 105c; (d) graphics port 114, for example, an advanced graphics port ("AGP"), providing an interface to specialized graphics hardware; (e) 3-D graphics processor 117, coupled to graphics port 114 across I/O bus 112, for providing high-performance 3-D graphics processing; and (f) one or more communication busses 104, for interconnecting CPU 102, memory 106, specialized graphics hardware 114, 3-D graphics processor 117, and optional user interface 105.
  • CPUs data processing units
  • memory 106a, 106b and 106c such as fast primary memory 106a, cache memory 106b,
  • I/O bus 112 can be any type of peripheral bus including but not limited to an advanced graphics port bus, a Peripheral Component Interconnect (PCI) bus, Industry Standard Architecture (ISA) bus, Extended Industry Standard Architecture (EISA) bus, MicroChannel Architecture, SCSI Bus, and the like.
  • PCI Peripheral Component Interconnect
  • ISA Industry Standard Architecture
  • EISA Extended Industry Standard Architecture
  • SCSI Small Computer System Interface
  • I/O bus 112 is an advanced graphics port bus.
  • the present invention also contemplates that one embodiment of computer 101 may have a command buffer (not shown) on the other side of graphics port 114, for queuing graphics hardware I/O directed to graphics processor 117.
  • Memory 106a typically includes operating system 108 and one or more application programs 110, or processes, each of which typically occupies a separate address space in memory 106 at runtime.
  • Operating system 108 typically provides basic system services, including, for example, support for an Application Program Interface ("API") for accessing 3-D graphics APIs such as Graphics Device Interface, DirectDraw/Direct3-D, and OpenGL. DirectDraw/Direct3-D and OpenGL are well-known APIs, and for that reason are not discussed in greater detail herein.
  • the application programs 110 may, for example, include user level programs for viewing and manipulating images.
  • a laptop or other type of computer, a workstation on a local area network connected to a server, or a dedicated gaming console can also be used in connection with the present invention.
  • Computer 101 simply serves as a convenient interface for receiving and transmitting messages to 3-D graphics processor 117.
  • 3-D graphics processor 117 may be provided as a separate PC board within computer 101, as a processor integrated onto the motherboard of computer 101, or as a stand-alone processor coupled to graphics port 114 across I/O bus 112 or another communication link.
  • Setup 215 is implemented as one processing stage of multiple processing stages in graphics processor 117. (Setup 215 corresponds to "setup stage 8000," as illustrated in United States Provisional Patent Application Serial Number 60/097,336).
  • Setup 215 is connected to other processing stages 210 across internal bus 211 and signal line 212.
  • Setup 215 is connected to other processing stages 220 across internal bus 216 and signal line 217.
  • Internal bus 211 and internal bus 216 can be any type of peripheral bus including but not limited to a Peripheral Component Interconnect (PCI) bus, Industry Standard Architecture (ISA) bus, Extended Industry Standard Architecture (EISA) bus, MicroChannel Architecture, SCSI Bus, and the like.
  • internal bus 211 is a dedicated on-chip bus.
  • in FIG. 3 there is shown an example of a preferred embodiment of other processing stages 210, including command fetch and decode 305, geometry 310, mode extraction 315, and sort 320.
  • Cmd Fetch / Decode 305 handles communications with host computer 101 through graphics port 114.
  • CFD 305 sends 2-D screen based data, such as bitmap blit window operations, directly to backend 440 (see FIG. 4), because 2-D data of this type does not typically need to be processed by the other stages in other processing stages 210 or other processing stages 220. All 3-D operation data (e.g., necessary transform matrices, material and light parameters, and other mode settings) are sent by CFD 305 to geometry 310.
  • Geometry 310 performs calculations that pertain to displaying frame geometric primitives, hereinafter, often referred to as "primitives,” such as points, line segments, and triangles, in a 3-D model. These calculations include transformations, vertex lighting, clipping, and primitive assembly. Geometry 310 sends "properly oriented" geometry primitives to mode extraction 315.
  • Mode extraction 315 separates the input data stream from geometry 310 into two parts: (1) spatial data, such as frame geometry coordinates, and any other information needed for hidden surface removal; and (2) non-spatial data, such as color, texture, and lighting information. Spatial data are sent to sort 320. The non-spatial data are stored into polygon memory (not shown), from which mode injection 415 (see FIG. 4) later retrieves them and reinserts them into pipeline 200.
  • Sort 320 sorts vertices and mode information with respect to multiple regions in a 2-D window. Sort 320 outputs the spatially sorted vertices and mode information on a region-by-region basis to setup 215.
  • the details of processing stages 210 are not necessary to practice the present invention, and for that reason other processing stages 210 are not discussed in further detail here.
  • in FIG. 4 there is shown an example of a preferred embodiment of other processing stages 220, including cull 410, mode injection 415, fragment 420, texture 425, Phong lighting 430, pixel 435, and backend 440.
  • the details of each of the processing stages in other processing stages 220 are not necessary to practice the present invention. However, for purposes of completeness, we will now briefly discuss each of these processing stages.
  • Cull 410 receives data from a previous stage in the graphics pipeline, such as setup 215, in region-by-region order, and discards any primitives, or parts of primitives, that definitely do not contribute to the rendered image. Cull 410 outputs spatial data that are not hidden by previously processed geometry.
  • Mode injection 415 retrieves mode information (e.g., colors, material properties, etc.) from polygon memory, such as other memory 235, and passes it to a next stage in graphics pipeline 200, such as fragment 420, as required.
  • Fragment 420 interpolates color values for Gouraud shading, surface normals for Phong shading, and texture coordinates for texture mapping, and interpolates surface tangents for use in a bump mapping algorithm (if required).
  • Texture 425 applies texture maps, stored in a texture memory, to pixel fragments.
  • Phong 430 uses the material and lighting information supplied by mode injection 415 to perform Phong shading for each pixel fragment.
  • Pixel 435 receives visible surface portions and the fragment colors and generates the final picture.
  • backend 440 receives a tile's worth of data at a time from pixel 435 and stores the data into a frame display buffer.
  • Setup 215 receives a stream of image data from a previous processing stage of pipeline 200.
  • the previous processing stage is sort 320 (see FIG. 3).
  • These image data include spatial information about geometric primitives (hereinafter, often referred to as "primitives") to be rendered by pipeline 200.
  • the primitives received from sort 320 can include, for example, filled triangles, line mode triangles, lines, stippled lines, and points.
  • These image data also include mode information, i.e., information that does not necessarily apply to any one particular primitive but rather typically applies to multiple primitives. Mode information is not processed by the present invention, but simply passed through to a subsequent stage of pipeline 200, for example, cull 410, and for this reason will not be discussed in further detail herein.
  • by the time setup 215 receives the image data from sort 320, the primitives have already been sorted, by sort 320, with respect to regions in a 2-D window that are intersected by the respective primitives.
  • Setup 215 receives this image data on a region-by-region basis. That is to say that all the primitives that intersect a respective region will be sent to setup 215 before all the primitives that intersect a different respective region are sent to setup 215, and so on.
  • sort 320 may send the same primitive many times, once for each region it intersects, or "touches.”
  • each region of the 2-D window is a rectangular tile.
  • Setup 215 receives the image data from sort 320 either organized in "time order” or in "sorted transparency order.”
  • in time order, the time order of receipt of the vertices and modes within each tile by all previous processing stages of pipeline 200 is preserved. That is, for a given tile, vertices and modes are read out of previous stages of pipeline 200 just as they were received, except when sort 320 is in sorted transparency mode.
  • "guaranteed opaque" primitives are received by setup 215 first, before setup 215 receives potentially transparent geometry.
  • guaranteed opaque means that a primitive completely obscures more distant primitives that occupy the same spatial area in a window.
  • Potentially transparent geometry is any geometry that is not guaranteed opaque.
  • Setup 215 prepares the incoming image data for processing by cull 410.
  • Setup 215 processes one tile's worth of image data, one primitive at a time. When it is done processing a primitive, it sends the data to cull 410 (see FIG. 4) in the form of a setup output primitive packet 8000 (see Table 8).
  • Each setup output primitive packet 8000 represents one primitive: a triangle, line segment, or point.
  • Cull 410 produces the visible stamp portions, or "VSPs" used by subsequent processing stages in pipeline 200.
  • a stamp is a region two pixels by two pixels in dimension; one pixel contains four sample points; and one tile has 16 stamps (8x8 pixels).
  • any convenient number of pixels in a stamp, sample, points in a pixel, and pixels in a tile may be used.
  • Cull 410 receives image data from setup 215 in tile order (in fact, in the order that setup 215 receives the image data from sort 320), and culls out those primitives and parts of primitives that definitely do not contribute to a rendered image. Cull 410 accomplishes this in two stages, the MCCAM cull stage and the Z cull stage. MCCAM cull 410 allows detection of those memory elements in a rectangular, spatially addressable memory array whose "content" (depth values) is greater than a given value. Spatially addressable memory is known.
  • To prepare the incoming image data for processing by MCCAM cull, setup 215, for each primitive: (a) determines the dimensions of a tight bounding box that circumscribes the part of the primitive that intersects a tile; and (b) computes a minimum depth value, "Zmin," for that part of the primitive that intersects the tile.
  • Zmin minimum depth value
  • MCCAM cull 410 uses the dimensions of the bounding box and the minimum depth value to determine which of multiple "stamps," each stamp lying within the dimensions of the bounding box, may contain depth values less than Zmin.
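That query can be sketched as a filter over the candidate stamps. The dictionary below stands in for the spatially addressable memory holding one stored depth per stamp, and the comparison direction (a stamp stays a candidate when its stored depth is farther than Zmin) is an assumption consistent with the greater-than detection described above:

```python
def mccam_query(stamp_z, candidate_stamps, zmin):
    """Keep the candidate stamps whose stored depth exceeds the primitive's
    Zmin, i.e. the stamps where the primitive might still be visible."""
    return [s for s in candidate_stamps if stamp_z[s] > zmin]
```

Stamps rejected here never reach the more expensive per-sample depth test.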
  • the procedures for determining the dimensions of a bounding box and the procedures for producing a minimum depth value are described in greater detail below. (For purposes of simplifying the description, those stamps that lie within the dimensions of the bounding box are hereinafter, referred to as "candidate stamps.")
  • Z cull 410 refines the work performed by MCCAM cull 410 in the process of determining which samples are visible by taking these candidate stamps and, if they are part of the primitive, computing the actual depth value for samples in that stamp. This more accurate depth value is then compared, on a sample-by-sample basis, to the z-values stored in a z-buffer memory in cull 410 to determine if the sample is visible.
  • a sample-by-sample basis simply means that each sample is compared individually, as compared to a step where a whole bounding box is compared at once.
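A minimal sketch of that per-sample test, assuming a smaller-is-nearer depth convention (the convention is not stated in this passage) and a hypothetical `z_of` depth interpolator:

```python
def z_cull(samples, zbuffer, z_of):
    """Return the samples whose interpolated primitive depth is nearer
    (smaller) than the depth already stored in the z-buffer."""
    return [s for s in samples if z_of(s) < zbuffer[s]]
```

Each sample is decided individually against its own stored z-value, in contrast to the single bounding-box comparison of the earlier stage.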
  • For those primitives that are lines and triangles, setup 215 also calculates spatial derivatives.
  • a spatial derivative is a partial derivative of the depth value. Spatial derivatives are also known as Z-slopes, or depth gradients.
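For a planar triangle these gradients are constant and follow from the plane normal; this is the standard derivation, sketched here, rather than the patent's specific hardware formulation:

```python
def depth_gradients(v0, v1, v2):
    """dz/dx and dz/dy of the triangle's plane z = f(x, y), computed from
    the cross product of two edge vectors (each vertex is (x, y, z))."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    nx = ay * bz - az * by      # normal n = a x b
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx      # nz == 0 would mean a degenerate, edge-on triangle
    return -nx / nz, -ny / nz
```

For example, a triangle lying in the plane z = 2x + 3y yields gradients (2, 3), which is what Z cull needs to interpolate depth across samples.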
  • the minimum depth value and a bounding box are utilized by MCCAM cull 410.
  • Setup 215 also determines a reference stamp in the bounding box (described in greater detail below) that contains the vertex with the minimum z-value (discussed in greater detail below in section 5.4.10).
  • the depth gradients and zref are used by Z-cull 410. Line (edge) slopes, intersections, and corners (top and bottom) are used by Z-cull 410 for edge walking.
  • Setup 215 interfaces with a previous stage of pipeline 200, for example, sort 320 (see FIG. 3), and a subsequent stage of pipeline 200, for example, cull 410 (see FIG. 4).
  • Sort 320 outputs the following types of packets to setup 215.
  • begin frame packet 1000 for delimiting the beginning of a frame of image data.
  • Begin frame packet 1000 is received by setup 215 from sort 320.
  • begin tile packet 2000 for delimiting the beginning of that particular tile's worth of image data.
  • in table 4 there is shown an example of a clear packet 4000, for indicating a buffer clear event.
  • table 5 there is shown an example of a cull packet 5000, for indicating, among other things, the packet type 5010.
  • table 6 there is shown an example of an end frame packet 6000, for indicating, by sort 320, the end of a frame of image data.
  • table 7 there is shown an example of a primitive packet 7000, for identifying information with respect to a primitive. Sort 320 sends one primitive packet 7000 to setup 215 for each primitive.
  • setup output primitive packet 8000 for indicating to a subsequent stage of pipeline 200, for example, cull 410, a primitive's information, including, information determined by setup 215.
  • setup 215 determined information is discussed in greater detail below.
  • geometric primitives including, for example, polygons, lines, and points.
  • Polygons arriving at setup 215 are essentially triangles, either filled triangles or line mode triangles.
  • a filled triangle is expressed as three vertices.
  • a line mode triangle is treated by setup 215 as three individual line segments.
  • Setup 215 receives window coordinates (x, y, z) defining three triangle vertices for both line mode triangles and for filled triangles. Note that the aliased state of the polygon (either aliased or anti-aliased) does not alter the manner in which filled polygon setup is performed by setup 215. Line mode triangles are discussed in greater detail below.
  • Line segments arriving at setup 215 essentially comprise a width, and two end points. Setup 215 does not modify the incoming line widths.
  • a line segment may be stippled.
  • a line segment may be aliased or anti-aliased.
  • a line's width is determined prior to setup 215. For example, it can be determined by a 3-D graphics processing application executing on computer 101 (see FIG. 1).
  • Pipeline 200 renders anti-aliased points as circles and aliased points as squares. Both circles and squares have a width.
  • the determination of a point's size and position are determined in a previous processing stage of pipeline 200, for example, geometry 310.
  • Setup 215 describes each primitive with a set of four vertices. Note that not all vertex values are needed to describe all primitives. To describe a triangle, setup 215 uses a triangle's top vertex, bottom vertex, and either left corner vertex or right corner vertex, depending on the triangle's orientation. A line segment is treated as a parallelogram, so setup 215 uses all four vertices to describe a line segment.
  • FIG. 16 shows examples of quadrilaterals generated for line segments. Note that quadrilaterals are generated differently for aliased and anti-aliased lines. For aliased lines, a quadrilateral's vertices also depend on whether the line is x-major or y-major. Note also that while a triangle's vertices are the same as its original vertices, setup 215 generates new vertices to represent a line segment as a parallelogram.
  • the unified representation of primitives uses two sets of descriptors to represent a primitive.
  • the first set includes vertex descriptors, each of which are assigned to the original set of vertices in window coordinates.
  • Vertex descriptors include VtxYmin, VtxYmax, VtxXmin, and VtxXmax.
  • the second set of descriptors are flag descriptors, or corner flags, used by setup 215 to indicate which vertex descriptors have valid and meaningful values.
  • Flag descriptors include VtxLeftC, VtxRightC, LeftCorner, RightCorner, VtxTopC, VtxBotC, TopCorner, and BottomCorner.
  • FIG. 23 illustrates aspects of unified primitive descriptor assignments, including corner flags. All of these descriptors have valid values for line segment primitives, but all of them may not be valid for triangles.
  • Treating triangles as rectangles involves specifying four vertices, one of which (typically y-left or y-right in one particular embodiment) is degenerate and not specified. To illustrate this, refer to FIG. 22, and triangle 20, where a left corner vertex (VtxLeftC) is degenerate, or not defined. With respect to triangle 10, a right corner vertex (VtxRightC) is degenerate.
  • using primitive descriptors according to the teachings of the present invention to describe triangles and line segments as rectangles provides a uniform way to set up primitives, because the same (or similar) algorithms, equations, calculations, and hardware can be used to operate on different primitives (such as, for example, the edge walking algorithm in cull 410 (see FIG. 4)), thus allowing for a more streamlined implementation of logic.
  • VtxYmin is the vertex with the minimum y value.
  • VtxYmax is the vertex with the maximum y value.
  • VtxLeftC is the vertex that lies to the left of the diagonal formed by joining the vertices VtxYmin and VtxYmax for line segments.
  • VtxRightC is the vertex that lies to the right of the diagonal formed by joining the vertices VtxYmin and VtxYmax for line segments.
  • VtxYmin, VtxYmax, VtxLeftC, VtxRightC, LeftCorner, RightCorner descriptors are obtained for triangles.
  • the vertices are sorted with respect to the y-direction. The procedures for sorting a triangle's coordinates with respect to y are discussed in greater detail below in section 5.4.1.1.
  • VtxYmin, the vertex with the minimum y value, and VtxYmax, the vertex with the maximum y value are assigned their respective values in a similar manner as that described immediately above with respect to line segments.
  • a long y-edge is equal to a left edge.
  • a triangle has exactly two edges that share a top most vertex (VtxYmax). Of these two edges, the one edge with an end point furthest left is the left edge. Analogous to this, the one edge with an end point furthest to the right is the right edge.
  • at step 25, LeftCorner is set to equal FALSE, meaning that VtxLeftC is degenerate, or not defined. If the long y-edge is not equal to the left edge (step 15), then at step 20 the procedure for uniformly describing primitives 500 assigns a value to VtxLeftC and sets LeftCorner to equal TRUE, indicating that VtxLeftC contains a valid value.
  • VtxLeftC is the vertex that lies to the left of the edge of the triangle formed by joining the vertices VtxYmin and VtxYmax (hereinafter, also referred to as the "long y-edge").
  • the procedure for determining whether a triangle has a left corner is discussed in greater detail below in section 5.4.1.3.
  • VtxRightC is the vertex that lies to the right of the long y-edge in the case of a triangle.
  • the procedure for determining whether a triangle has a right corner is discussed in greater detail below in section 5.4.1.3. Note that in practice VtxYmin, VtxYmax, VtxLeftC, and VtxRightC are indices into the original primitive vertices.
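The y-sort and left/right-corner classification above can be sketched as follows. This is a simplified illustration: it tests the middle vertex against the long y-edge directly, which is equivalent in spirit to the patent's reciprocal-slope comparison; the function name and data layout are assumptions.

```python
def y_sort_descriptors(verts):
    """verts: three (x, y) tuples in time order. Returns vertex descriptors
    as indices into verts, plus the LeftCorner/RightCorner flags."""
    order = sorted(range(3), key=lambda i: verts[i][1])  # ascending y
    i_min, i_mid, i_max = order
    (xb, yb), (xm, ym), (xt, yt) = verts[i_min], verts[i_mid], verts[i_max]
    # x-coordinate of the long y-edge (VtxYmin..VtxYmax) at the middle y
    x_long = xb + (xt - xb) * (ym - yb) / (yt - yb)
    left = xm < x_long          # middle vertex lies left of the long y-edge
    return {
        "VtxYmin": i_min, "VtxYmax": i_max,
        "VtxLeftC": i_mid if left else None,   # degenerate when flag is False
        "VtxRightC": None if left else i_mid,
        "LeftCorner": left, "RightCorner": not left,
    }

d = y_sort_descriptors([(0.0, 0.0), (2.0, 1.0), (1.0, 2.0)])
# middle vertex (2, 1) lies right of the long y-edge, so RightCorner is set
```

Exactly one of LeftCorner/RightCorner is set for a non-degenerate triangle, which is what lets the same four-vertex description cover both orientations.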
  • Setup 215 uses VtxYMin, VtxYmax, VtxLeftC, VtxRightC, LeftCorner, and RightCorner to clip a primitive with respect to the top and bottom edges of the tile. Clipping will be described in greater detail below in section 5.4.6.
  • VtxXmin is the vertex with the minimum x value.
  • VtxXmax is the vertex with the maximum x value.
  • VtxTopC is the vertex that lies above the diagonal formed by joining the vertices VtxXmin and VtxXmax for parallelograms.
  • VtxBotC is the vertex that lies below the long x-edge in the case of a triangle, and below the diagonal formed by joining the vertices VtxXmin and VtxXmax for parallelograms.
  • VtxXmin and VtxXmax are assigned values as for the discussion immediately above with respect to line segments.
  • TopCorner is set to equal false indicating that VtxTopC is degenerate, or not defined.
  • a triangle has exactly two edges that share the maximum x-vertex (VtxXmax). The topmost of these two edges is the "top edge." Analogous to this, the bottommost of these two edges is the "bottom edge."
  • VtxTopC is assigned an appropriate value and TopCorner is set to equal TRUE, indicating that VtxTopC contains a valid value.
  • the appropriate value for VtxTopC is the vertex that lies above the edge joining vertices VtxXmin and VtxXmax (hereinafter, this edge is often referred to as the "long x-edge"). The procedure for determining whether a triangle has a top corner is discussed in greater detail below in section 5.4.1.5.
  • at step 30, it is determined whether the long x-edge is equal to the bottom edge, and if so, at step 40, BottomCorner is set to equal FALSE, indicating that VtxBotC is degenerate, or not defined. If the long x-edge is not equal to the bottom edge (step 30), then an appropriate value is assigned to VtxBotC and BottomCorner is set to equal TRUE, indicating that VtxBotC contains a valid value. The appropriate value for VtxBotC is the vertex that lies below the long x-edge. The procedure for determining whether a triangle has a bottom corner is discussed in greater detail below in section 5.4.1.5.
  • VtxXmin, VtxXmax, VtxTopC, and VtxBotC are indices into the original triangle primitive.
  • Setup 215 uses VtxXmin, VtxXmax, VtxTopC, VtxBotC, TopCorner, and BottomCorner to clip a primitive with respect to the left and right edges of a tile. Clipping will be described in greater detail below.
  • FIG. 23 To illustrate the use of the unified primitive descriptors of the present invention, refer to FIG. 23, where there is shown an illustration of multiple triangles and line segments described using vertex descriptors and flag descriptors according to a preferred embodiment of the unified primitive description of the present invention.
  • Setup's 215 I/O subsystem architecture is designed around the need to process primitive and mode information received from sort 320 (see FIG. 3) in a manner that is optimal for processing by cull 410 (see FIG. 4). To accomplish this task, setup 215 performs a number of procedures to prepare information about a primitive with respect to a corresponding tile for cull 410.
  • triangle preprocessor 2 for generating unified primitive descriptors, calculating line slopes and reciprocal slopes of the three edges, and determining if a triangle has a left or right corner
  • line preprocessor 2 for determining the orientation of a line, calculating the slope of the line and the reciprocal, identifying left and right slopes and reciprocal slopes, and discarding end-on lines
  • point preprocessor 2 for calculating a set of spatial information required by a subsequent culling stage of pipeline 200
  • trigonometric unit 3 for calculating the half widths of a line, and for processing anti-aliased lines by increasing a specified width to improve image quality
  • quadrilateral generation unit 4 for converting lines into quadrilaterals centered around the line, and for converting aliased points into a square of appropriate width
  • clipping unit 5 for clipping
  • FIG. 8 illustrates a preferred embodiment of the present invention where triangle preprocessor unit 2, line preprocessor unit 2, and point preprocessor unit 2 are located in the same unit 2. However, in yet other embodiments, each respective unit can be implemented as a different unit.
  • input buffer 1 comprises a queue and a holding buffer.
  • the queue is approximately 32 entries deep by approximately 140 bytes wide. Input data packets from a previous process in pipeline 200, for example, sort 320, requiring more bits than the queue is wide will be split into two groups and occupy two entries in the queue.
  • the queue is used to balance the different data rates between sort 320 (see FIG. 3) and setup 215.
  • the present invention contemplates that sort 320 and setup 215 cooperate if input queue 1 reaches capacity.
  • the holding buffer holds vertex information read from a triangle primitive while setup 215 breaks the triangle into its visible edges for line mode triangles.
  • Output buffer 10 is used by setup 215 to queue image data processed by setup 215 for delivery to a subsequent stage of pipeline 200, for example, cull 410.
  • FIG. 8 also illustrates the data flow between the functional units that implement the procedures of the present invention.
  • Setup starts with a set of vertices, (x0, y0, z0), (x1, y1, z1), and (x2, y2, z2).
  • Setup 215 assumes that the vertices of a filled triangle fall within a valid range of window coordinates, that is to say, that a triangle's coordinates have been clipped to the boundaries of the window. This procedure can be performed by a previous processing stage of pipeline 200, for example, geometry 310 (see FIG. 3).
  • triangle preprocessing unit 2 first generates unified primitive descriptors for each triangle that it receives.
  • the triangle preprocessor (1) sorts the three vertices in the y direction, to determine the top-most vertex (VtxYmax), middle vertex (either VtxRightC or VtxLeftC), and bottom-most vertex (VtxYmin); (2) calculates the slopes and reciprocal slopes of the triangle's three edges; (3) determines if the y-sorted triangle has a left corner (LeftCorner) or a right corner (RightCorner); (5) sorts the three vertices in the x-direction, to determine the right-most vertex (VtxXmax), middle vertex, and leftmost vertex (VtxXmin); and, (6) identifies the slopes that correspond to the x-sorted Top (VtxTopC) and Bottom (VtxBotC) vertices.
  • the present invention sorts the filled triangle's vertices in the y-direction using, for example, the following three equations.
  • the time ordered vertices are V0, V1, and V2, where V0 is the oldest vertex, and V2 is the newest vertex.
  • Pointers are used by setup 215 to identify which time-ordered vertex corresponds to which Y-sorted vertex, including, top (VtxYmax), middle (VtxLeftC or VtxRightC), and bottom (VtxYmin). For example,
  • YsortTopSrc represents a three-bit encoding to identify which of the time ordered vertices is VtxYmax.
  • YsortMidSrc represents a three-bit encoding to identify which of the time ordered vertices is VtxYmid.
  • YsortBotSrc represents a three-bit encoding to identify which of the time ordered vertices is VtxYmin.
  • VT(XT, YT, ZT)
  • VB(XB, YB, ZB)
  • VM(XM, YM, ZM), where VT has the largest Y and VB has the smallest Y.
  • VT is VtxYmax
  • VB is VtxYmin
  • VM is VtxYmid.
  • Reciprocal slopes need to be mapped to labels corresponding to the y-sorted order, because V0, V1, and V2 are time-ordered vertices.
  • S01, S12, and S20 are slopes of edges respectively between: (a) V0 and V1; (b) V1 and V2; and, (c) V2 and V0. So after sorting the vertices with respect to y, we will have slopes between VT and VM, VT and VB, and VM and VB. In light of this, pointers are determined accordingly.
  • a preferred embodiment of the present invention maps the reciprocal slopes to the following labels: (a) YsortSTMSrc identifies which time ordered slope corresponds to STM (between VT and VM); (b) YsortSTBSrc identifies which time ordered slope corresponds to STB (between VT and VB); and, (c) YsortSMBSrc identifies which time ordered slope corresponds to SMB (between VM and VB).
  • // encoding is 3 bits, "one-hot": {S12, S01, S20}. One-hot means that only one bit can be a "one."
  • the indices refer to which bit is being referenced.
  • Whether the middle vertex is on the left or the right is determined by comparing the slopes dx2/dy of the line formed by vertices v[i2] and v[i1], and dx0/dy of the line formed by vertices v[i2] and v[i0]. If (dx2/dy > dx0/dy), then the middle vertex is to the right of the long edge; else it is to the left of the long edge. The computed values are then assigned to the primitive descriptors. Assigning the x descriptors is similar. We thus have the edge slopes and vertex descriptors we need for the processing of triangles.
  • the indices sorted in ascending y-order are used to compute a set of (dx/dy) derivatives, and the indices sorted in ascending x-order are used to compute the (dy/dx) derivatives for the edges.
  • the steps are: (1) calculate time ordered slopes S01, S12, and S20; (2) map to y-sorted slopes STM, SMB, and STB; and, (3) do a slope comparison to map slopes to SLEFT, SRIGHT, and SBOTTOM.
  • the slopes are calculated for the vertices in time order. That is, (X0, Y0) represents the first vertex, or "V0," received by setup 215; (X1, Y1) represents the second vertex, or "V1," received by setup 215; and (X2, Y2) represents the third vertex, or "V2," received by setup 215.
  • a left slope is defined as slope of dy/dx where "left edge” is defined earlier.
  • a right slope is defined as slope of dy/dx where "right edge” is defined earlier.
  • a bottom slope is defined as the slope of dy/dx where the y-sorted "bottom edge” is defined earlier. (There is also an x-sorted bottom edge.)
  • de-referenced reciprocal slopes are significant because they represent the y-sorted slopes. That is to say that they identify slopes between y- sorted vertices.
  • FIG. 10 there is shown yet another illustration of slope assignments according to one embodiment of the present invention for triangles and line segments.
  • slope naming convention for purposes of simplifying this detailed description.
  • SlYmaxLeft represents the slope of the left edge, connecting VtxYmax and VtxLeftC. If LeftC is not valid, then SlYmaxLeft is the slope of the long edge.
  • the letter r in front indicates that the slope is reciprocal.
  • a reciprocal slope represents (dy/dx) instead of (dx/dy).
  • the slopes are represented as {SlYmaxLeft, SlYmaxRight, SlLeftYmin, SlRightYmin} and the inverses of the slopes (dy/dx) as {rSlXminTop, rSlXminBot, rSlTopXmax, rSlBotXmax}.
  • setup 215 compares the reciprocal slopes to determine the LeftC or RightC of a triangle.
  • YsortSNTM is greater than or equal to YsortSNTB
  • the triangle has a left corner, or "LeftC," and the following assignments can be made: (a) set LeftC equal to true ("1"); (b) set RightC equal to false ("0"); (c) set YsortSNLSrc equal to YsortSNTMSrc (identify pointer for left slope); (d) set YsortSNRSrc equal to YsortSNTBSrc (identify pointer for right slope); and, (e) set YsortSNBSrc equal to YsortSNMBSrc (identify pointer for bottom slope).
  • YsortSNTM is less than YsortSNTB
  • the triangle has a right corner, or "RightC," and the following assignments can be made: (a) set LeftC equal to false ("0"); (b) set RightC equal to true ("1"); (c) set YsortSNLSrc equal to YsortSNTBSrc (identify pointer for left slope); (d) set YsortSNRSrc equal to YsortSNTMSrc (identify pointer for right slope); and, (e) set YsortSNBSrc equal to YsortSNMBSrc (identify pointer for bottom slope).
  • the calculations for sorting a triangle's vertices with respect to "y" also need to be repeated for the triangle's vertices with respect to "x," because an algorithm used in the clipping unit 5 (see FIG. 8) needs to know the sorted order of the vertices in the x direction.
  • the procedure for sorting a triangle's vertices with respect to "x" is analogous to the procedures used above for sorting a triangle's vertices with respect to "y," with the exception, of course, that the vertices are sorted with respect to "x," not "y."
  • the equations for sorting a triangle's vertices with respect to "x" are provided below.
  • Pointers are used to identify which time-ordered vertex corresponds to which Y-sorted vertex.
  • pointers are used to identify the source (from the time- ordered (VO, VI and V2) to X-sorted ("destination" vertices VL, VR, and VM)).
  • source simply emphasizes that these are pointers to the data.
  • setup 215 identifies pointers to each destination (time-ordered to X- sorted).
  • Xsort0dest = {!X1GeX0 & X0GeX2, !X1GeX0 ^ X0GeX2, X1GeX0 & !X0GeX2}.
  • Xsort1dest = {X1GeX0 & !X2GeX1, X1GeX0 ^ !X2GeX1, !X1GeX0 & X2GeX1}.
  • Xsort2dest = {X2GeX1 & !X0GeX2, X2GeX1 ^ !X0GeX2, !X2GeX1 & X0GeX2}.
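The one-hot destination encodings can be checked with a small sketch. Variable names follow the text; reading the middle term as an XOR of the two comparisons and the {max, middle, min} bit order are our assumptions.

```python
def xsort_dests(x0, x1, x2):
    """One-hot destination vectors for x-sorting three time-ordered vertices.
    Each vector is (is_VR, is_VM, is_VL); exactly one bit is set per vertex
    when the x values are distinct."""
    X1GeX0, X2GeX1, X0GeX2 = x1 >= x0, x2 >= x1, x0 >= x2
    d0 = ((not X1GeX0) and X0GeX2, (not X1GeX0) ^ X0GeX2, X1GeX0 and not X0GeX2)
    d1 = (X1GeX0 and not X2GeX1, X1GeX0 ^ (not X2GeX1), (not X1GeX0) and X2GeX1)
    d2 = (X2GeX1 and not X0GeX2, X2GeX1 ^ (not X0GeX2), (not X2GeX1) and X0GeX2)
    return d0, d1, d2

# e.g. x0=3, x1=1, x2=2: V0 is the right-most vertex, V1 the left-most,
# and V2 the middle, so d0, d1, d2 each have a different bit set
```

Note that the XOR reading makes each middle bit true exactly when neither the "max" bit nor the "min" bit is, which is what makes the encoding one-hot.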
  • VM (XM, YM, ZM), where VR has the largest X and VL has the smallest X.
  • X sorted data has no ordering information available with respect to Y or Z.
  • one embodiment of the present invention determines pointers to identify the source of the slopes (from time ordered to x-sorted). For example, consider the following equations:
  • XsortSRMSrc = {!Xsort1dest[0] & !Xsort2dest[0], !Xsort0dest[0] & !Xsort1dest[0], !Xsort2dest[0] & !Xsort0dest[0]};
  • XsortSRLSrc = {!Xsort1dest[1] & !Xsort2dest[1], !Xsort0dest[1] & !Xsort1dest[1], !Xsort2dest[1] & !Xsort0dest[1]}; and, XsortSMLSrc = {!Xsort1dest[2] & !Xsort2dest[2], !Xsort0dest[2] & !Xsort1dest[2], !Xsort2dest[2] & !Xsort0dest[2]}, where XsortSRMSrc represents the source (V0, V1, or V2) of the slope between VR and VM.
  • XsortSRM slope between VR and VM
  • XsortSRL slope between VR and VL
  • XsortSML slope between VM and VL
  • the triangle has a BotC and the following assignments can be made: (a) set BotC equal to true ("1"); (b) set TopC equal to false ("0"); (c) set XsortSBSrc equal to XsortSRMSrc (identify x-sorted bottom slope); (d) set XsortSTSrc equal to XsortSRLSrc (identify x-sorted top slope); and, (e) set XsortSLSrc equal to XsortSMLSrc (identify x-sorted left slope).
  • TopC represents a top corner.
  • the triangle has a top corner (TopCorner or TopC) and the following assignments can be made: (a) set BotC equal to false; (b) set TopC equal to true; (c) set XsortSBSrc equal to XsortSRLSrc (identify x-sorted bottom slope); (d) set XsortSTSrc equal to XsortSRMSrc (identify x-sorted top slope); and, (e) set XsortSLSrc equal to XsortSMLSrc (identify x-sorted left slope).
  • V0, V1, and V2 are time ordered vertices.
  • S01, S12, and S20 are time ordered slopes.
  • X-sorted VR, VL, and VM are x-sorted right, left and middle vertices.
  • X- sorted SRL, SRM, and SLM are slopes between the x-sorted vertices.
  • X-sorted ST, SB, and SL are respectively the x-sorted top, bottom, and left slopes. BotC, if true, means that there is a bottom corner; likewise for TopC and a top corner.
  • the object of line preprocessing unit 2 is to: (1) determine the orientation of the line segment (a line segment's orientation includes, for example: (a) whether the line is X-major or Y-major; (b) whether the line segment is pointing right or left (Xcnt); and, (c) whether the line segment is pointing up or down (Ycnt)); this is beneficial because Xcnt and Ycnt represent the direction of the line, which is needed for processing stippled line segments; and (2) calculate the slope of the line and the reciprocal slope; this is beneficial because the slopes are used to calculate the tile intersection points, which are also passed to cull 410 (see FIG. 4).
  • this unit of the present invention determines a line segment's orientation with respect to a corresponding tile of the 2-D window.
  • FIG. 11 there is shown an example of aspects of line orientation according to one embodiment of the present invention.
  • setup 215 determines whether a line segment points to the right or to the left.
  • DX01 = X1 - X0.
  • if DX01 is greater than zero, setup 215 sets XCnt equal to "up," meaning that the line segment is pointing to the right. In a preferred embodiment of the present invention, "up" is represented by a "1," and "down" is represented by a "0." Otherwise, if DX01 is less than or equal to zero, setup 215 sets XCnt equal to "down," that is to say that the line segment is pointing to the left. DX01 is the difference between X1 and X0.
  • Ycnt up, that is to say that the line is pointing up.
  • Ycnt dn, that is to say that the line is pointing down.
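The orientation tests above can be sketched together. Names follow the text; the 1/0 encoding follows the preferred embodiment, and the tie-break toward x-major for diagonal lines is our assumption.

```python
def line_orientation(x0, y0, x1, y1):
    """Classify a line segment's direction and major axis.
    xcnt: 1 ("up") when the segment points right, 0 ("down") when left.
    ycnt: 1 when pointing up, 0 when pointing down.
    x_major: True when the segment advances at least as fast in x as in y."""
    dx01 = x1 - x0            # DX01 = X1 - X0
    dy01 = y1 - y0
    xcnt = 1 if dx01 > 0 else 0
    ycnt = 1 if dy01 > 0 else 0
    x_major = abs(dx01) >= abs(dy01)
    return xcnt, ycnt, x_major
```

Xcnt and Ycnt are later used both for stipple traversal order and for picking the quadrilateral vertex equations.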
  • setup 215 determines a line's reciprocal slope.
  • FIG. 12 illustrates aspects of line segment slopes.
  • Setup 215 now labels a line's slope according to the sign of the slope (S01) and based on whether the line is aliased or not. For non-antialiased lines, setup 215 sets the slope of the ends of the lines to zero. (Infinite dx/dy is discussed in greater detail below).
  • if S01 is greater than or equal to 0: (a) the slope of the line's left edge (SL) is set to equal S01; (b) the reciprocal slope of the left edge (SNL) is set to equal SN01; (c) if the line is anti-aliased, setup 215 sets the slope of the line's right edge (SR) to equal -SN01, and setup 215 sets the reciprocal slope of the right edge (SNR) to equal -S01; (d) if the line is not antialiased, the slope of the line's right edge and the reciprocal slope of the right edge are set to equal zero (infinite dx/dy); (e) LeftCorner, or LeftC, is set to equal true ("1"); and, (f) RightCorner, or RightC, is set to equal true.
  • Setup 215 receives edge flags in addition to window coordinates (x, y, z) corresponding to the three triangle vertices. Referring to table 6, there are shown edge flags (LineFlags) 5. These edge flags 5 tell setup 215 which edges are to be drawn. Setup 215 also receives a "factor" (see table 6, factor (ApplyOffsetFactor) 4) used in the computation of polygon offset. This factor "f" is used to offset the depth values in a primitive. Effectively, all depth values are to be offset by an amount equal to offset = max[|∂z/∂x|, |∂z/∂y|] × factor.
  • For each line mode polygon, setup 215: (1) computes the partial derivatives of z along x and y (note that these z gradients are for the triangle and are needed to compute the z offset for the triangle; these gradients do not need to be computed if factor is zero); (2) computes the polygon offset, if polygon offset computation is enabled, and adds the offset to the z value at each of the three vertices; (3) traverses the edges in order; if the edge is visible, then setup 215 draws the edge using line attributes such as the width and stipple (setup 215 processes one triangle edge at a time); (4) draws the line based on line attributes such as anti-aliased or aliased, stipple, width, and the like; and, (5) assigns an appropriate primitive code to the rectangle depending on which edge of the triangle it represents and sends it to cull 410.
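The offset step above can be sketched as follows, assuming the formula offset = factor × max(|∂z/∂x|, |∂z/∂y|); any additional constant bias term is omitted, and the function name is ours.

```python
def polygon_offset_z(z_verts, dzdx, dzdy, factor):
    """Add the polygon offset to the z value at each of the three vertices.
    The z gradients need not be computed (or used) when factor is zero."""
    if factor == 0:
        return list(z_verts)
    offset = factor * max(abs(dzdx), abs(dzdy))
    return [z + offset for z in z_verts]

# factor 0.5 with gradients (0.2, -0.8) shifts every vertex z by 0.5 * 0.8
```

Scaling by the steepest depth gradient is what keeps the offset large enough to lift an edge line above its own steeply sloped triangle.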
  • a "primitive code” is an encoding of the primitive type, for example, 01 equals a triangle, 10 equals a line, and
  • stippled line processing utilizes "stipple information," and line orientation information (see section 5.2.5.2.1 Line Orientation) to reduce unnecessary processing by setup 215 of quads that lie outside of the current tile's boundaries.
  • stipple preprocessing breaks up a stippled line into multiple individual line segments.
  • Stipple information includes, for example, a stipple pattern (LineStipplePattern) 6 (see table 6), a stipple repeat factor (LineStippleRepeatFactor) 8, a stipple start bit (StartLineStippleBit0 and StartLineStippleBit1), for example stipple start bit 12, and a stipple repeat start (for example, StartStippleRepeatFactor0) 23 (stplRepeatStart).
  • Geometry 310 is responsible for computing the stipple start bit 12 and stipple repeat start 23 offsets at the beginning of each line segment.
  • quadrilateral vertex generation unit 4 (see FIG. 8) has provided us with the half width displacements.
  • Stippled line preprocessing will break up a stippled line segment into multiple individual line segments, with line lengths corresponding to sequences of 1 bits in the stipple pattern, starting at the stplStart bit, with a repeat factor start of stplRepeatStart for the first bit.
  • stplRepeatStart 4
  • the quad line segment will have a length of 6.
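The stipple segmentation described above can be sketched as follows. This is a simplified model: the LSB-first bit order, the 16-bit pattern width, and the exact handling of stplRepeatStart are assumptions for illustration.

```python
def stipple_runs(pattern, repeat, start_bit=0, repeat_start=0, total=16):
    """Break a stippled line into individual segments: one (offset, length)
    pair per run of consecutive 1 bits. 'pattern' is a 16-bit stipple pattern
    consumed LSB-first; each bit covers 'repeat' units, except the first bit,
    from which 'repeat_start' units are already consumed. 'total' is the line
    length in stipple units."""
    runs, pos, bit, consumed = [], 0, start_bit, repeat_start
    run_start = None
    while pos < total:
        step = min(repeat - consumed, total - pos)
        if (pattern >> (bit % 16)) & 1:
            if run_start is None:
                run_start = pos      # a visible segment begins here
        elif run_start is not None:
            runs.append((run_start, pos - run_start))
            run_start = None
        pos += step
        bit += 1
        consumed = 0
    if run_start is not None:
        runs.append((run_start, pos - run_start))
    return runs
```

Each returned (offset, length) pair would become one quad; the per-line quantities noted below (gradients, half-widths, and so on) are shared across all of them.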
  • depth gradients, line slopes, depth offsets, x-direction widths (xhw), and y-direction widths (yhw) are common to all stipple quads of a line segment, and therefore need to be generated only once.
  • Line segments are converted by Trigonometric Functions and Quadrilateral Generation Units, described in greater detail below (see sections 5.2.5.X and 5.2.5.X, respectively) into quadrilaterals, or "quads.”
  • For antialiased lines, the quads are rectangles.
  • For non-antialiased lines, the quads are parallelograms.
  • CYT 20 represents circle's 10 topmost point, clipped by tile's 15 top edge, in tile coordinates.
  • CYB 30 represents circle's 10 bottommost point, clipped by tile's 15 bottom edge, in tile coordinates.
  • Yoffset 25 represents the distance between CYT 20 and CYB 30, the bottom of the unclipped circle 10.
  • X0 35 represents the "x" coordinate of the center 5 of circle 10, in window coordinates. This information is required and used by cull 410 to determine which sample points are covered by the point.
  • V0(X0, Y0, Z0) (the center of the circle and the Zmin);
  • setup 215 converts all lines, including line mode triangles and points, into quadrilaterals.
  • the trigonometric function unit 3 (see FIG. 8) calculates an x-direction half-width and a y-direction half-width for each line and point. (Quadrilateral generation for filled triangles is discussed in greater detail above in section 5.4.1). Procedures for generating vertices for line and point quadrilaterals are discussed in greater detail below in section 5.4.5.
  • setup 215 determines the trigonometric functions cos θ and sin θ using the line's slope that was calculated in the line preprocessing functional unit described in greater detail above. For example:
  • the above discussed trigonometric functions are calculated using a lookup table and iteration method, similar to rsqrt and other complex math functions.
  • Rsqrt stands for the reciprocal square root.
  • FIG. 14 there is shown an example of the relationship between the orientation of a line and the signs of the resulting cos θ and sin θ. As is illustrated, the signs of the resulting cos θ and sin θ will depend on the orientation of the line.
  • setup 215 uses the above determined cos θ and sin θ to calculate a primitive's "x" direction half-width ("HWX") and a primitive's "y" direction half-width ("HWY").
  • the line's half width is the offset distance in the x and y directions from the center of the line to what will be the quadrilateral's edges.
  • the half width is equal to one-half of the point's width.
  • These half-widths are magnitudes, meaning that the x-direction half-widths and the y-direction half-widths are always positive. For purposes of illustration, refer to FIG.
  • each quadrilateral 1420, 1425, and 1430 has a width ("W"), for example, W 1408, W 1413, and W 1418. In a preferred embodiment of the present invention, this width "W" is contained in a primitive packet 6000 (see table 6). (Also, refer to FIG. 16, where there are shown examples of x-major and y-major aliased lines in comparison to an antialiased line.)
  • setup 215 uses the following equations:
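The half-width computation can be sketched as follows, assuming HWX = (width/2)·|sin θ| and HWY = (width/2)·|cos θ| with θ the line's angle; the patent's per-orientation equation sets may differ in sign handling, and the function name is ours.

```python
import math

def half_widths(dx, dy, width):
    """x- and y-direction half-widths (magnitudes, always positive) for a
    line of the given width whose direction is (dx, dy). cos/sin come from
    the slope, as in the trigonometric unit; here math.hypot stands in for
    the hardware's lookup-table/iteration method."""
    length = math.hypot(dx, dy)
    cos_t = dx / length
    sin_t = dy / length
    hw = width / 2.0
    return hw * abs(sin_t), hw * abs(cos_t)   # (HWX, HWY)

# a horizontal line of width 4 needs no x displacement: HWX = 0, HWY = 2
```

Because the returned values are magnitudes, the sign of each displacement is applied later, when the quadrilateral vertices are generated from the line's orientation.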
  • Quadrilateral generation unit 4 (see FIG. 8): (1) generates a quadrilateral centered around a line or a point; and, (2) sorts a set of vertices for the quadrilateral with respect to a quadrilateral's top vertex, bottom vertex, left vertex, and right vertex. With respect to quadrilaterals, quadrilateral generation unit 4: (a) converts anti-aliased lines into rectangles; (b) converts non-anti-aliased lines into parallelograms; and, (c) converts aliased points into squares centered around the point. (For filled triangles, the vertices are just passed through to the next functional unit, for example, clipping unit 5 (see FIG. 8)).
  • a quadrilateral's vertices are generated by taking into consideration: (a) a line segment's original vertices (a primitive's original vertices are sent to setup 215 in a primitive packet 6000, see table 6, WindowX0 19, WindowY0 20, WindowZ0 21, WindowX1 14, WindowY1 15, WindowZ1 16, WindowX2 9, WindowY2 10, and, WindowZ2 11); (b) a line segment's orientation (line orientation is determined and discussed in greater detail above in section 5.2.5.2.1); and, (c) a line segment's x-direction half-width and y-direction half-width (half-widths are calculated and discussed in greater detail above in section 5.2.5.4).
  • a quadrilateral's vertices are generated by adding or subtracting a line segment's half-widths to or from the line segment's original vertices.
  • setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:
  • the quadrilateral vertices are as yet unsorted, but the equations were chosen such that they can easily be sorted based on the values of Ycnt and Xcnt.
  • FIG. 17 illustrates aspects of pre-sorted vertex assignments for quadrilaterals according to an embodiment of the present invention.
  • quadrilateral 1605 delineates a line segment that points right and up, having vertices QV0 1606, QV1 1607, QV2 1608, and QV3 1609.
  • setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:
  • quadrilateral 1610 delineates a line segment that points left and up, having vertices QV0 1611, QV1 1612, QV2 1613, and QV3 1614.
  • setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:
  • quadrilateral 1615 delineates a line segment that points left and down, having vertices QV0 1616, QV1 1617, QV2 1618, and QV3 1619.
  • setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:
  • QY0 = Y0 - HWY
  • QX0 = X0 - HWX
  • quadrilateral 1620 delineates a line segment that points right and down, having vertices QV0 1621, QV1 1622, QV2 1623, and QV3 1624.
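For illustration, the half-width construction described above can be sketched as follows. This is a simplified sketch, not the hardware's exact per-quadrant equations: the real unit distinguishes the four pointing directions so the vertices come out pre-sorted, while this version only distinguishes x-major from y-major lines. The function name and vertex ordering are assumptions.

```python
def quad_vertices(x0, y0, x1, y1, hwx, hwy, x_major):
    """Build four vertices of a quadrilateral centered on a line segment
    by adding/subtracting half-widths at each endpoint.  For an x-major
    line the offset is applied in y, and vice versa (illustrative only)."""
    if x_major:
        return [(x0, y0 - hwy), (x0, y0 + hwy),
                (x1, y1 - hwy), (x1, y1 + hwy)]
    else:
        return [(x0 - hwx, y0), (x0 + hwx, y0),
                (x1 - hwx, y1), (x1 + hwx, y1)]
```

An x-major segment from (0, 0) to (10, 2) with a y half-width of 1 yields a parallelogram whose left edge is the pair (0, -1), (0, 1).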
  • a vertical line segment is treated as if it is pointing to the left and up.
  • a horizontal line segment is treated as if it is pointing right and up.
  • quadrilateral generation functional unit 4 uses the following logic.
  • top and bottom vertices are assigned according to the following equations: (a) vertices (QXT, QYT, QZT) are set to respectively equal (QX3, QY3, Z1); and, (b) vertices (QXB, QYB, QZB) are set to respectively equal (QX0, QY0, Z0).
  • top and bottom vertices are assigned according to the following equations: (a) vertices (QXT, QYT, QZT) are set to respectively equal (QX0, QY0, Z0); and, (b) vertices (QXB, QYB, QZB) are set to respectively equal (QX3, QY3, Z1).
  • left and right vertices are assigned according to the following equations: (a) vertices (QXL, QYL, QZL) are set to respectively equal (QX1, QY1, Z0); and, (b) vertices (QXR, QYR, QZR) are set to respectively equal (QX2, QY2, Z1).
  • clipping a polygon to a tile can be defined as finding the area of intersection between a polygon and a tile.
  • the clip points are the vertices of this area of intersection.
  • To find a tight bounding box that encloses the parts of a primitive that intersect a particular tile, and to facilitate a subsequent determination of the primitive's minimum depth value (Zmin), clipping unit 5 (see FIG. 8), for each edge of a tile: (a) selects a tile edge (each tile has four edges) to determine which, if any, of a quadrilateral's edges, or three triangle edges, cross the tile edge; (b) checks the clip codes (discussed in greater detail below) with respect to the selected edge; (c) computes the two intersection points (if any) of a quad edge or a triangle edge with the selected tile edge; and, (d) compares the computed intersection points to the tile boundaries to determine validity and updates the clip points if appropriate.
  • the "current tile” is the tile cu ⁇ ently being set up for cull 410 by setup 215.
  • a previous stage of pipeline 200 sorts each primitive in a frame with respect to those regions, or tiles of a window (the window is divided into multiple tiles) that are touched by the primitive. These primitives were sent in a tile-by-tile order to setup 215.
  • setup 215 can select an edge in an arbitrary manner, as long as each edge is eventually selected. For example, one embodiment of clipping unit 5 can first select a tile's top edge, next the tile's right edge, next the tile's bottom edge, and finally the tile's left edge. In yet another embodiment of clipping unit 5, the tile edges may be selected in a different order.
  • Sort 320 provides setup 215 the x-coordinate (TileXLocation) of the current tile's left tile edge, and the y-coordinate (TileYLocation) of the bottom tile edge, via a begin tile packet (see table 2).
  • the tile's x-coordinate is referred to as "tile x"
  • the tile's y-coordinate is referred to as "tile y."
  • clipping unit 5 sets the left edge of the tile equal to tile x, which means that the left tile edge x-coordinate is equal to tile x + 0.
  • the current tile's right edge is set to equal the tile's left edge plus the width of the tile.
  • the current tile's bottom edge is set to equal tile y, which means that this y-coordinate is equal to tile y + 0.
  • the tile's top edge is set to equal the bottom tile edge plus the height of the tile in pixels.
  • the width and height of a tile are each 16 pixels.
  • the dimensions of the tile can be any convenient size.
  • Clip codes are used to determine which edges of a polygon, if any, touch the current tile. (A previous stage of pipeline 200 has sorted each primitive with respect to those tiles of a 2-D window that each respective primitive touches.)
  • clip codes are Boolean values, wherein "0" represents false and "1" represents true.
  • a clip code value of false indicates that a primitive does not need to be clipped with respect to the edge of the current tile that that particular clip code represents.
  • a value of true indicates that a primitive does need to be clipped with respect to the edge of the current tile that that particular clip code represents.
  • clip codes are obtained as follows for each of a primitive's vertices.
  • C[i] = ((v[i].y > tile_ymax) << 3)
  • clip codes are obtained using the following set of equations: (1) in the case of quads, use the following mapping, where "Q" represents a quadrilateral's respective coordinates, and TileRht, TileLft, TileTop and TileBot respectively represent the x-coordinate of a right tile edge, the x-coordinate of a left tile edge, the y-coordinate of a top tile edge, and the y-coordinate of a bottom tile edge.
  • ClpFlag[3] for triangles is a don't care.
  • ClpFlagL[1] asserted means that vertex 1 is clipped by the left edge of the tile (the vertices have already been sorted by the quad generation unit 4, see FIG. 8).
  • ClpFlagR[2] asserted means that vertex 2 is clipped by the right edge of the tile, and the like. Here, "clipped" means that the vertex lies outside of the tile.
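The per-vertex clip-code computation described above can be sketched as follows. The four-bit layout (bit 3 = top, bit 2 = bottom, bit 1 = right, bit 0 = left) is an assumption consistent with the `(v[i].y > tile_ymax) << 3` fragment above, not necessarily the hardware's exact encoding:

```python
def clip_code(x, y, tile_lft, tile_rht, tile_bot, tile_top):
    """One Boolean per tile edge, asserted when the vertex lies outside
    that edge; a vertex inside the tile gets a code of zero."""
    code = 0
    if y > tile_top: code |= 1 << 3   # clipped by top edge
    if y < tile_bot: code |= 1 << 2   # clipped by bottom edge
    if x > tile_rht: code |= 1 << 1   # clipped by right edge
    if x < tile_lft: code |= 1 << 0   # clipped by left edge
    return code
```

A code of zero for every vertex means the primitive needs no clipping against the current tile; a nonzero code names exactly the tile edges that must be considered.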
  • clipping unit 5 clips the primitive to the tile by determining the values of nine possible clipping points.
  • a clipping point is a vertex of a new polygon formed by clipping (finding the area of intersection) the initial polygon by the boundaries of the current tile.
  • There are nine possible clipping points because there are eight distinct locations where a polygon might intersect a tile's edge. For triangles only, there is an internal clipping point which equals the y-sorted VtxMid. Of these nine possible clipping points, at most eight of them can be valid at any one time.
  • the following acronyms are adopted to represent each respective clipping point: (1) clipping on the top tile edge yields left (PTL) and right (PTR) clip vertices;
  • Clipping unit 5 now validates each of the computed clipping points, making sure that the coordinates of each clipping point are within the coordinate space of the current tile. For example, points that intersect the top tile edge may be such that they are both to the left of the tile. In this case, the intersection points are marked invalid.
  • each clip point has an x-coordinate, a y-coordinate, and a one bit valid flag. Setting the flag to "0" indicates that the x-coordinate and the y-coordinate are not valid. If the intersection with the edge is such that one or both of a tile's edge corners (such corners were discussed in greater detail above) are included in the intersection, then the newly generated intersection points are valid.
  • a primitive is discarded if none of its clipping points are found to be valid.
  • the pseudo-code for an algorithm for determining clipping points according to one embodiment of the present invention is illustrated below:
  • ClpFlagL[XsortMidSrc, XsortRhtSrc, XsortLftSrc, XsortMidSrc], where indices 3:0 of the clip flags refer to vertices. In particular, 0 represents bottom; 1 represents left; 2 represents right; and 3 represents top.
  • ClipFlagL[2] refers to whether time-order vertex 2 is clipped by the left edge.
  • XsortClipFlagL[2] refers to the right-most vertex.
  • ClipYRT = (intYRT > TileTop) ? TileTop : intYRT
  • ValidClipBot = ValidXBL & ValidXBR
  • ClipXTL = (intXTL < TileLeft) ? TileLeft : intXTL
  • ValidClipTop = ValidXTL & ValidXTR
  • the 8 clipping points identified so far can identify points clipped by the edge of the tile and also extreme vertices (i.e., topmost, bottommost, leftmost, or rightmost) that are inside of the tile.
  • One more clipping point is needed to identify a vertex that is inside the tile but is not at an extremity of the polygon (i.e., the vertex called VM).
  • ClipM = XsortMidSrc → mux(Clip0, Clip1, Clip2)
  • ValidClipI = !(ClpFlgL[YsortMidSrc]) & !(ClpFlgR[YsortMidSrc])
  • CullXTL and CullXTR are the X intercepts of the polygon with the line of the top edge of the tile. They differ from PTL and PTR in that PTL and PTR must be within or at the tile boundaries, while CullXTL and CullXTR may be right or left of the tile boundaries. If YT lies below the top edge of the tile, then
  • CullXTL = CullXTR = XT.
  • CullYTLR = the Y coordinate shared by CullXTL and CullXTR
  • VtxRht = (quad) ? P2 : YsortMidSrc → mux(P0, P1, P2)
  • VtxLft = (quad) ? P1 : YsortMidSrc → mux(P0, P1, P2)
  • CullSR, CullSL, CullSB = cvt(YsortSNR, YsortSNL, YsortSNB)
  • Setup 215 will pass the following values to cull 410: (1) If tRight.x is right of the window range, then clamp to the right window edge; (2) If tLeft.x is left of the window range, then clamp to the left window edge; (3) If v[VtxRightC].x is right of the window range, then send vertex rLow (that is, the lower clip point on the right tile edge) as the right corner; and, (4) If v[VtxLeftC].x is left of the window range, then send lLow (that is, the lower clip point on the left tile edge) as the left corner. This is illustrated in FIG.
  • FIG. 22 illustrates aspects of clip code vertex assignment.
  • the bounding box is the smallest box that can be drawn around the clipped polygon.
  • the bounding box of the primitive intersection is determined by examining the clipped vertices (clipped vertices, or clipping points are described in greater detail above). We use these points to compute dimensions for a bounding box.
  • the dimensions of the bounding box are identified by BXL (the left-most of the valid clip points), BXR (the right-most of the valid clip points), BYT (the top-most of the valid clip points), and BYB (the bottom-most of the valid clip points), in stamps.
  • stamp refers to the resolution to which we want to determine the bounding box.
  • setup 215 identifies the smallest Y (the bottom-most y-coordinate of a clipped polygon). This smallest Y is required by cull 410 for its edge walking algorithm.
  • the valid flags for the clip points are as follows: ValidClipL (requires that clip points PLT and PLB are valid), ValidClipR, ValidClipT, and ValidClipB correspond to the clip codes described in greater detail above in reference to clipping unit 5 (see FIG. 8).
  • PLT refers to "point left, top.”
  • PLT and (ClipXL, ClipYLT) are the same.
  • BXLtemp = min_valid(ClipXTL, ClipXBL); BXL = ValidClipL ? ClipXL : BXLtemp;
  • BYTtemp = max_valid(ClipYLT, ClipYRT); BYT = ValidClipT ? ClipYT : BYTtemp;
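The bounding-box extraction from valid clip points can be sketched as follows. The representation of a clip point as an (x, y, valid) triple follows the description above; the function name and return convention are assumptions made for illustration:

```python
def bounding_box(clip_points):
    """Compute the bounding-box extremes (BXL, BXR, BYB, BYT) as the
    min/max over only the *valid* clip points.  clip_points is a list
    of (x, y, valid) triples."""
    xs = [x for x, y, v in clip_points if v]
    ys = [y for x, y, v in clip_points if v]
    if not xs:
        return None   # no valid clip point: the primitive is discarded
    return min(xs), max(xs), min(ys), max(ys)   # BXL, BXR, BYB, BYT
```

Invalid clip points (for example, top-edge intersections that both fell left of the tile) are simply ignored, and a primitive with no valid clip points at all is discarded, as stated above.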
  • Screen relative coordinates can describe a 2048 by 2048 pixel screen.
  • tiles are only 16 by 16 pixels in size.
  • Converting from screen coordinates to tile relative coordinates is simply a matter of ignoring (or truncating) the most significant bits. To illustrate this, consider the example: it takes 11 bits to describe 2048 pixels, whereas it takes only 4 bits to describe 16 pixels. Discarding the top 7 bits will yield a tile relative value.
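The screen-to-tile conversion just described amounts to a bit mask. A minimal sketch (the constant name is an assumption; the 16-pixel tile size and 11-bit screen range come from the text above):

```python
TILE_BITS = 4   # 16-pixel tiles -> 4 low-order address bits

def to_tile_relative(screen_coord):
    """Keep only the low 4 bits of an 11-bit screen coordinate
    (0..2047), i.e. discard the top 7 bits, yielding a tile-relative
    coordinate in 0..15."""
    return screen_coord & ((1 << TILE_BITS) - 1)
```

For example, screen coordinate 35 falls at pixel 3 within its tile, since 35 = 2 × 16 + 3.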
  • BYT = trunc(BYT - 1 subpixel)stamp
  • BYB = trunc(BYB)stamp
  • BXL = trunc(BXL)stamp
  • BXR = trunc(BXR - 1 subpixel)stamp.
  • the input vertices are the time-ordered triangle vertices (X0, Y0, Z0), (X1, Y1, Z1), (X2, Y2, Z2).
  • the input vertices are 3 of the quad vertices produced by Quad Gen: (QXB, QYB, ZB), (QXL, QYL, ZL), (QXR, QYR, ZR).
  • the Z partials are calculated once (for the original line) and saved and reused for each stippled line segment.
  • the vertices are first sorted before being inserted into the equation to calculate depth gradients.
  • the sorting information was obtained in the triangle preprocessing unit described in greater detail above. (The information is contained in the pointers YsortTopSrc, YsortMidSrc, and YsortBotSrc.)
  • the vertices are already sorted by Quadrilateral Generation unit 4 described in greater detail above. Note: Sorting the vertices is desirable so that changing the input vertex ordering will not change the results.
  • pseudocode for sorting the vertices:
  • the partial derivatives represent the depth gradient for the polygon. They are given by the following equation:
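The equation referenced here appears to have been lost in extraction. For illustration, one standard formulation of a triangle's depth gradients from its plane equation is sketched below; it is offered as a plausible reconstruction, not necessarily the exact expression used by setup 215:

```python
def depth_gradients(v0, v1, v2):
    """Compute (dZ/dX, dZ/dY) for the plane through three vertices,
    each given as an (x, y, z) triple -- the standard plane-equation
    form of a polygon's depth gradient."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    # Denominator: twice the signed area of the triangle in x-y.
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    dzdx = ((z1 - z0) * (y2 - y0) - (z2 - z0) * (y1 - y0)) / det
    dzdy = ((x1 - x0) * (z2 - z0) - (x2 - x0) * (z1 - z0)) / det
    return dzdx, dzdy
```

A degenerate (zero-area or edge-on) triangle makes the denominator zero, which connects to the edge-on detection and gradient-overflow handling discussed further below.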
  • M = max( |Dz/Dx| , |Dz/Dy| )
  • Factor is a parameter supplied by the user;
  • Res is a constant; and, Units is a parameter supplied by the user.
  • the multiply by 8 is required to maintain the units.
  • the depth offset will be added to the Z values when they are computed for Zmin and Zref later. In the case of line mode triangles, the depth offset is calculated once, saved, and applied to each of the subsequent triangle edges.

5.4.8.2.1 Determine X major for triangles
  • Z values are computed using an "edge-walking" algorithm. This algorithm requires information regarding the orientation of the triangle, which is determined here.
  • Xmajor = the value of Xmajor as determined for lines in the TLP subunit.
  • An x-major line is defined in the OpenGL® specification. In setup 215, an x-major line is determined early, but conceptually may be determined anywhere it is convenient.
  • the two values needed by the Zmin and Zref subunit are ZslopeMjr (the Z derivative along the major edge) and ZslopeMnr (the Z gradient along the minor axis).
  • For Ymajor triangles, it is the edge connecting the topmost and bottommost vertices.
  • For lines, it is the axis of the line. Note that although we often refer to the major edge as the "long edge," it is not necessarily the longest edge. It is the edge that spans the greatest distance along either the x or y dimension; and, (d) Minor Axis: If the triangle or line is Xmajor, then the minor axis is the y axis. If the triangle or line is Ymajor, then the minor axis is the x axis. To compute ZslopeMjr and ZslopeMnr: If Xmajor Triangle:
  • ZslopeMjr = (QZR - QZB) / (QXR - QXB)
  • ZslopeMnr = ZY
  • ZslopeMjr = (QZL - QZB) / (QXL - QXB)
  • ZslopeMjr = (QZR - QZB) / (QYR - QYB)
  • ZslopeMnr = ZX
  • ZslopeMjr = (QZL - QZB) / (QYL - QYB)
  • ZslopeMnr = ZX
  • cull 410 has a fixed point datapath that is capable of handling Dz/Dx and Dz/Dy no wider than 35 bits. These 35 bits are used to specify a value that is designated T27.7 (a two's complement number that has 27 integer bits and 7 fractional bits). Hence, the magnitude of the depth gradients must be less than 2^27.
  • GRMAX is the threshold for the largest allowable depth gradient (it is set via the auxiliary ring, that is, determined and set via software executing on, for example, computer 101; see FIG. 1).
  • Edge-on triangles are detected in depth gradient unit 7 (see FIG. 8). Whenever Dz/Dx or Dz/Dy is infinite (overflows), the triangle is invalidated. However, edge-on line mode triangles are not discarded; each of the visible edges is to be rendered. In a preferred embodiment of the present invention, the depth offset (if turned on) for such a triangle will, however, overflow, and be clamped to +/- 2^24.
  • An infinite dx/dy implies that an edge is perfectly horizontal. In the case of horizontal edges, one of the two end-points has to be a corner vertex (VtxLeftC or VtxRightC). With a primitive whose coordinates lie within the window range, Cull 410 (see FIG. 4) will not make use of an infinite slope. This is because, with Cull's 410 edge walking algorithm, it will be able to tell from the y value of the left and/or right corner vertices that it has turned a corner and that it will not need to walk along the horizontal edge at all. Only when coordinates lie outside the window range will Cull's 410 edge walking need a slope.
  • any X that edge walking calculates with a correctly signed slope will cause an overflow (or underflow), and X will simply be clamped back to the window edge. So it is actually unimportant what value of slope it uses, as long as it is of the correct sign.
  • a value of infinity is also a don't care for setup's 215 own usage of slopes.
  • Setup uses slopes to calculate intercepts of primitive edges with tile edges.
  • a dx/dy of infinity necessarily implies a ΔY of zero. If the implementation is such that zero times any number equals zero, then dx/dy is a don't care.
  • Setup 215 calculates slopes internally in floating point format. The floating point units will assert an infinity flag should an infinite result occur. Because Setup doesn't care about infinite slopes, and Cull 410 doesn't care about the magnitude of infinite slopes, but does care about the sign, setup 215 doesn't need to express infinity. To save the trouble of determining the correct sign, setup 215 forces an infinite slope to ZERO before it passes it onto Cull 410.
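The force-to-zero policy just described can be sketched as follows. The function name is an assumption; the behavior mirrors the text: an infinite result (division by a zero ΔY) is replaced by zero before being handed to cull, since setup never uses infinite slopes itself and cull only needs a harmlessly signed value:

```python
def edge_slope_for_cull(dx, dy):
    """Compute dx/dy for an edge, forcing the infinite-slope case
    (horizontal edge, dy == 0) to zero, as setup does before passing
    slopes on to cull."""
    if dy == 0.0:
        return 0.0   # would be +/- infinity; cull never walks this edge
    return dx / dy
```

This avoids both representing infinity in the fixed-point interface and determining the "correct" sign of an infinite slope.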
  • the object of this subunit is to: (a) select the 3 possible locations where the minimum Z value may be; (b) calculate the Z's at these 3 points, applying a correction bias if needed; (c) select the minimum Z value of the polygon within the tile; (d) use the stamp center nearest the location of the minimum Z value as the reference stamp location; (e) compute the Zref value; and, (f) apply the Z offset value.
  • ClipTL = (ClipXTL, ClipYT, ValidClipT)
  • ClipLT = (ClipXL, ClipYLT, ValidClipL), etc.
  • Line (1) represents the change in Z as you walk along the long edge down to the appropriate Y coordinate.
  • Line (2) is the change in Z as you walk in from the long edge to the destination X coordinate.
  • a correction to the zmin value may need to be applied if xmin0 or ymin0 is equal to a tile edge. Because of the limited precision math units used, the values of the intercepts (computed above while calculating intersections and determining clipping points) have an error less than +/- 1/16 of a pixel. To guarantee, then, that we compute a Zmin that is less than what would be the infinitely precise Zmin, we apply a bias to the zmin that we compute here.
  • the minimum valid value of the three Zmin candidates is the tile's Zmin.
  • the stamp whose center is nearest the location of the Zmin is the reference stamp.
  • the pseudocode for selecting the Zmin is as follows:
  • ZminTmp = ((Zmin1 < Zmin0) & Zmin1Valid | !Zmin0Valid) ? Zmin1 : Zmin0;
  • ZminTmpValid = ((Zmin1 < Zmin0) & Zmin1Valid | !Zmin0Valid) ? Zmin1Valid : Zmin0Valid; and,
  • Zmin = ((ZminTmp < Zmin2) & ZminTmpValid | !Zmin2Valid) ? ZminTmp : Zmin2
  • the x and y coordinates corresponding to each of Zmin0, Zmin1, and Zmin2 are also sorted in parallel along with the determination of Zmin. So when Zmin is determined, there is also a corresponding xmin and ymin.
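The validity-aware minimum selection described above generalizes to a simple loop. This sketch carries the candidate coordinates along with each Z, matching the note that xmin and ymin are sorted in parallel; the (z, x, y, valid) tuple layout is an assumption made for illustration:

```python
def select_zmin(candidates):
    """Pick the minimum-Z candidate among (z, x, y, valid) tuples,
    where invalid candidates never win.  Returns (z, x, y), or None
    if no candidate is valid (primitive discarded)."""
    z, x, y, valid = candidates[0]
    for zc, xc, yc, vc in candidates[1:]:
        # A candidate wins if it is valid and either the current best
        # is invalid or the candidate's Z is smaller.
        if vc and (not valid or zc < z):
            z, x, y, valid = zc, xc, yc, vc
    return (z, x, y) if valid else None
```

With three candidates this reduces to the two-stage compare of the pseudocode above (Zmin0 versus Zmin1, then the survivor versus Zmin2).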
  • Setup passes a single Z value, representing the Z value at a specific point within the primitive.
  • Setup chooses a reference stamp that contains the vertex with the minimum z.
  • the reference stamp is the stamp whose center is closest to the location of the Zmin as determined in section 5.4.9.3. (The coordinates are called xmin, ymin.) That stamp center is found by truncating the xmin and ymin values to the nearest even value. For vertices on the right edge, the x-coordinate is decremented, and for the top edge the y-coordinate is decremented, before the reference stamp is computed, to ensure choosing a stamp center that is within tile boundaries.
  • the reference Z value, "Zref," is calculated at the center of the reference stamp.
  • Setup 215 identifies the reference stamp with a pair of 3 bit values, xRefStamp and yRefStamp, that specify its location in the Tile.
  • the reference stamp is identified as an offset in stamps from the corner of the Tile.
  • the reference stamp must touch the clipped polygon. To ensure this, choose the center of the stamp nearest the location of the Zmin to be the reference stamp. In the Zmin selection and sorting, keep track of the vertex coordinates that were ultimately chosen. Call this point (Xmin, Ymin).
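The reference-stamp selection can be sketched as follows, working in tile-relative coordinates. The 2x2-pixel stamp size is an assumption consistent with "truncating to the nearest even value" and the 3-bit xRefStamp/yRefStamp fields for a 16-pixel tile; the edge handling is shown as a clamp rather than the exact hardware decrement:

```python
TILE_SIZE = 16   # pixels (from the text); stamps assumed 2x2 pixels

def reference_stamp(xmin, ymin):
    """Pick the stamp containing (xmin, ymin), nudging points that sit
    on the right/top tile edges inward so the chosen stamp stays inside
    the tile.  Returns (xRefStamp, yRefStamp), each a 3-bit offset."""
    if xmin >= TILE_SIZE: xmin = TILE_SIZE - 1   # right-edge case
    if ymin >= TILE_SIZE: ymin = TILE_SIZE - 1   # top-edge case
    # Truncating to the nearest even pixel and halving gives the stamp index.
    return int(xmin) // 2, int(ymin) // 2
```

A Zmin located exactly on the right tile edge (x = 16) thus maps to the last stamp column (index 7) instead of falling outside the tile.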
  • Zref = (Yref - Ytop) * ZslopeMjr + (Xref - ((Yref - Ytop) * DX/DYlong + Xtop)) * ZslopeMnr (note that Ztop and the offset are NOT yet added).
  • Sort 320 sends screen relative values to setup 215.
  • Setup 215 does most calculations in screen relative space.
  • Setup 215 then converts results to tile relative space for cull 410.
  • Cull 410 culls primitives using these coordinates.
  • the present invention is a tiled architecture. Both this invention and the mid-pipeline cull unit 410 are novel.
  • Cull 410 requires a new type of information that is not calculated by conventional setup units. For example, consider the last 21 elements in setup output primitive packet 6000 (see table 6). Some of these elements are tile relative which helps efficiency of subsequent processing stages of pipeline 200.
  • Block3DPipe 1 0 SW BKE
  • TileXLocation 7 4 SRT STP,CUL,PIX,BKE
  • BackendClearDepth 1 24 SRT CUL, PIX, BKE
  • PrimType 119 STP CUL ColorType. STP creates unified packets for triangles and lines, but they may have different aliasing state, so CUL needs to know whether the packet is a point, line, or triangle.

Abstract

The present invention provides a post tile sorting setup in a tiled graphics pipeline architecture (200). In particular, the present invention determines a set of clipping points that identify intersections of a primitive with a tile. The mid-pipeline setup unit (215) is adapted to compute a minimum depth value for that part of the primitive intersecting the tile. The mid-pipeline setup unit can be adapted to process primitives with x-coordinates that are screen based and y-coordinates that are tile based. Additionally, the mid-pipeline setup unit is adapted to represent both line segments and triangles as quadrilaterals, wherein not all of a quadrilateral's vertices are required to describe a triangle.

Description

APPARATUS AND METHOD FOR PERFORMING SETUP OPERATIONS IN A 3-D GRAPHICS PIPELINE USING UNIFIED PRIMITIVE DESCRIPTORS
Inventors:
Jerome F. Duluk Jr., Richard E. Hessel, Vaughn T. Arnold, Jack Benkual, George Cuan, Steven L. Dodgen, Emerson S. Fang, Hengwei Hsu, and Sushma S. Trivedi.
Related Applications
This application claims the benefit under 35 USC Section 119(e) of U.S.
Provisional Patent Application Serial No. 60/097,336 filed 20 August 1998 and entitled GRAPHICS PROCESSOR WITH DEFERRED SHADING; which is hereby incorporated by reference. This application also claims the benefit under 35 USC Section 120 of U.S.
Patent Application Serial No. 09,213,990 filed December 17, 1998 entitled HOW TO
DO TANGENT SPACE LIGHTING IN A DEFERRED SHADING
ARCHITECTURE (Atty. Doc. No. A-66397);
Serial No , filed , entitled SYSTEM, APPARATUS AND METHOD FOR SPATIALLY SORTING IMAGE DATA IN A
THREE-DIMENSIONAL GRAPHICS PIPELINE (Atty. Doc. No. A-66380); Serial No , filed , entitled GRAPHICS
PROCESSOR WITH PIPELINE STATE STORAGE AND RETRIEVAL (Atty. Doc.
No. A-66378); Serial No , filed , entitled METHOD AND
APPARATUS FOR GENERATING TEXTURE (Atty. Doc. No. A-66398); Serial No , filed , entitled APPARATUS AND
METHOD FOR GEOMETRY OPERATIONS IN A 3D GRAPHICS PIPELINE (Atty. Doc. No. A-66373);
Serial No , filed , entitled APPARATUS AND METHOD FOR FRAGMENT OPERATIONS IN A 3D GRAPHICS PIPELINE (Atty. Doc. No. A-66399); and
Serial No , filed , entitled DEFERRED SHADING
GRAPHICS PIPELINE PROCESSOR (Atty. Doc. No. A-66360).
Serial No , filed , entitled METHOD AND APPARATUS FOR PERFORMING CONSERVATIVE HIDDEN SURFACE REMOVAL IN A GRAPHICS PROCESSOR WITH DEFERRED SHADING (Attorney Doc. No. A-66386);
Serial No , filed , entitled DEFERRED SHADING
GRAPHICS PIPELINE PROCESSOR HAVING ADVANCED FEATURES (Atty. Doc. No. A-66364).
1. Field of the Invention
The present invention relates generally to computer structure and method for processing three-dimensional ("3-D") computer graphics in a 3-D graphics processor. More particularly, the present invention is directed to a computer structure and method for performing setup operations in a tiled graphics pipeline architecture using unified primitive descriptors, post tile sorting setup, and tile relative y-values and screen relative x-values.
2. Background of the Invention
The art and science of three-dimensional ("3-D") computer graphics concerns the generation, or rendering, of two-dimensional ("2-D") images of 3-D objects for display or presentation onto a display device or monitor, such as a Cathode Ray Tube or a Liquid Crystal Display. The object may be a simple geometry primitive such as a point, a line segment, or a polygon. More complex objects can be rendered onto a display device by representing the objects with a series of connected planar polygons, such as, for example, by representing the objects as a series of connected planar triangles. All geometry primitives may eventually be described in terms of one vertex or a set of vertices, for example, a coordinate (x, y, z) that defines a point such as the endpoint of a line segment or a corner of a polygon.
To generate a data set for display as a 2-D projection representative of a 3-D primitive onto a computer monitor or other display device, the vertices of the primitive must be processed through a series of operations, or processing stages in a graphics rendering pipeline. A generic pipeline is merely a series of cascading processing units, or stages wherein the output from a prior stage, serves as the input for a subsequent stage. In the context of a graphics processor, these stages include, for example, per- vertex operations, primitive assembly operations, pixel operations, texture assembly operations, rasterization operations, and fragment operations.
The details of the various processing stages, except where otherwise noted, are not necessary to practice the present invention, and for that reason, will not be discussed in greater detail herein. A summary of the common processing stages in a conventional rendering pipeline can be found in the following standard reference: "Fundamentals of Three-dimensional Computer Graphics", by Watt, Chapter 5: The Rendering Process, pages 97 to 113, published by Addison- Wesley Publishing Company, Reading, Massachusetts, 1989, reprinted 1991, ISBN 0-201-15442-0, which is hereby incorporated by reference for background purposes only.
Very few conventional graphics pipelines have tiled architectures. A tiled architecture is a graphics pipeline architecture that associates image data, and in particular geometry primitives, with regions in a 2-D window, where the 2-D window is divided into multiple equally sized regions. Tiled architectures are beneficial because they allow a graphics pipeline to efficiently operate on smaller amounts of image data. In other words, a tiled graphics pipeline architecture presents an opportunity to incorporate specialized, higher performance graphics hardware into the graphics pipeline.
Those graphics pipelines that do have tiled architectures do not perform mid-pipeline sorting of the image data with respect to the regions of the 2-D window. Conventional graphics pipelines typically sort image data either in software at the beginning of the graphics pipeline, before any image data transformations have taken place, or in hardware at the very end of the graphics pipeline, after rendering the image into a 2-D grid of pixels.
Significant problems are presented by sorting image data at the very beginning of the graphics pipeline. For example, such sorting typically involves dividing intersecting primitives into smaller primitives where the primitives intersect, thereby creating more vertices. Because none of these vertices have yet been transformed into an appropriate coordinate space, each of them must be transformed by a subsequent vertex transformation stage of the graphics pipeline. Vertex transformation is computationally intensive. Increasing the number of vertices by subdividing primitives before transformation therefore slows down the already slow vertex transformation process.
Significant problems are also presented by spatially sorting image data at the end of a graphics pipeline (in hardware). For example, sorting image data at the end of a graphics pipeline typically slows image processing down, because such an implementation typically "texture maps" and rasterizes image data that will never be displayed. To illustrate this, consider the following example, where a first piece of geometry is spatially located behind a second piece of opaque geometry. In this illustration, the first piece of geometry is occluded by the second piece of opaque geometry. Therefore, the first piece of geometry will never be displayed. To facilitate the removal of occluded primitives, an additional value (beyond color) is typically maintained for each bitmap pixel of an image. This additional value is typically known as a z-value (also known as a "depth value"). The z-value is a measure of the distance from the eyepoint to the point on the object represented by the pixel with which the z-value corresponds. Removing primitives or parts of primitives that are occluded by other geometry is beneficial because it optimizes a graphics pipeline by processing only those image data that will be visible. The process of removing hidden image data is called culling.
Those graphics pipelines that do have tiled architectures do not perform culling operations. Because, as discussed in greater detail above, it is desirable to sort image data mid-pipeline, after image data coordinate transformations have taken place and before the image data has been texture mapped and/or rasterized, it is also desirable to remove hidden pixels from the image data before the image data has been texture mapped and/or rasterized. Therefore, what is also needed is a tiled graphics pipeline architecture that performs not only mid-pipeline sorting, but also mid-pipeline culling.
In a tile based graphics pipeline architecture, it is desirable to provide a culling unit with accurate image data information on a tile relative basis. Such image data information includes, for example, providing the culling unit those vertices defining the intersection of a primitive with a tile's edges. To accomplish this, the image data must be clipped to a tile. This information should be sent to the mid-pipeline culling unit. Therefore, because a mid-pipeline cull unit is novel and its input requirements are unique, what is also needed is a structure and method for a mid-pipeline post tile sorting setup unit for setting up image data information for the mid-pipeline culling unit.
It is desirable that the logic in a mid-pipeline culling unit in a tiled graphics pipeline architecture be as high performance and streamlined as possible. The logic in a culling unit can be optimized for high performance by reducing the number of branches in its logical operations. For example, conventional culling operations typically include logic, or algorithms, to determine which of a primitive's vertices lie within a tile, hereinafter referred to as a vertex/tile intersection algorithm. Conventional culling operations typically implement a number of different vertex/tile intersection algorithms to accomplish this, one algorithm for each primitive type. A beneficial aspect of needing only one such algorithm to determine whether a line segment's or a triangle's vertices lie within a tile, as compared to requiring two such algorithms (one for each primitive type), is that the total number of branches in the logic implementing such vertex/tile intersection algorithms is reduced. In other words, one set of algorithms/set of equations/set of hardware could be used to perform the vertex/tile intersection algorithm for a number of different primitive types. In light of this, it would be advantageous to have a procedure for representing different primitives, such as, for example, a line segment and a triangle, as a single primitive type, while still retaining each respective primitive type's unique geometric information. In this manner, the logic in a mid-pipeline culling unit in a tiled graphics pipeline architecture could be streamlined.
Other stages of a graphics pipeline, besides a culling unit, could also benefit in a similar manner from a procedure for representing different primitives as a single primitive type, while still retaining each respective primitive type's unique geometric information. For example, a processing stage that sets up information for a culling unit could also share a set of algorithms/set of equations/set of hardware for calculating different primitive information.
In conventional tile based graphics pipeline architectures, geometry primitive vertices, or x-coordinates and y-coordinates, are typically passed between pipeline stages in screen based coordinates. Typically x-coordinates and y-coordinates are represented as integers having a limited number of fractional bits (sub pixel bits).
Because it is desirable to architect a tile based graphics pipeline architecture to be as streamlined as possible, it would be beneficial to represent x-coordinates and y-coordinates with a smaller number of bits to reduce the amount of data being sent to a subsequent stage of the graphics pipeline. Therefore, what is needed is a structure and method for representing x-coordinates and y-coordinates in a tile based graphics pipeline architecture, such that the number of bits required to pass vertex information to subsequent stages of the graphics pipeline is reduced.
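To illustrate the bit savings, consider the following sketch. The tile height, screen height, and sub-pixel precision used here are assumed example values, not values specified by the present invention, and the function and variable names are likewise hypothetical.

```python
import math

TILE_HEIGHT = 16    # pixels per tile in y (assumed example value)
SUBPIXEL_BITS = 3   # fractional sub-pixel bits (assumed example value)

def to_tile_relative_y(screen_y, tile_top):
    """Express a y-coordinate relative to the top edge of its tile."""
    return screen_y - tile_top

def bits_needed(coordinate_range, subpixel_bits):
    """Integer bits to cover [0, coordinate_range) plus fractional bits."""
    return math.ceil(math.log2(coordinate_range)) + subpixel_bits

# A y value on a 2048-pixel-tall screen needs 11 + 3 = 14 bits, but the
# same value expressed tile-relative needs only 4 + 3 = 7 bits.
screen_bits = bits_needed(2048, SUBPIXEL_BITS)
tile_bits = bits_needed(TILE_HEIGHT, SUBPIXEL_BITS)
```

Keeping x-coordinates screen relative while making y-coordinates tile relative, as described herein, applies this savings to y only; the sketch merely shows where the savings come from.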
3. Summary of the Invention
Heretofore, tile based graphics pipeline architectures have been limited by: sorting image data either prior to the graphics pipeline or in hardware at the end of the graphics pipeline; the absence of tile based graphics pipeline architecture culling units; the absence of mid-pipeline post tile sorting setup units to support tile based culling operations; and larger vertex memory storage requirements.
The present invention overcomes the limitations of the state-of-the-art by providing structure and method in a tile based graphics pipeline architecture for: (a) a mid-pipeline post tile sorting setup unit that supplies a mid-pipeline cull unit with tile relative image data information; (b) a unified primitive descriptor language for representing triangles and line segments as quadrilaterals, thereby reducing the logic branching requirements of a mid-pipeline culling unit; and, (c) representing each of a primitive's vertices in tile relative y-values and screen relative x-values, thereby reducing the number of bits needed to accurately and efficiently represent a primitive's vertices in subsequent stages of the graphics pipeline.
In summary, a mid-pipeline setup unit is one processing stage of a tile based 3-D graphics pipeline. The mid-pipeline setup unit processes image data in preparation for a subsequent mid-pipeline culling unit. A mid-pipeline sorting unit, previous to the mid-pipeline setup unit, has already sorted the image data with respect to multiple tiles comprising a 2-D window. The image data include vertices describing a primitive.
In particular, the mid-pipeline setup unit is adapted to determine a set of clipping points that identify an intersection of the primitive with the tile, and also adapted to compute a minimum depth value for that part of the primitive intersecting the tile.
In yet another embodiment of the present invention, the primitive's x-coordinates are screen based and the y-coordinates are tile based.
In yet another embodiment of the present invention, the mid-pipeline setup unit is adapted to represent line segments and triangles as rectangles. Both line segments and triangles in this embodiment are described with respective sets of four vertices. In the case of triangles, not all of the vertices are needed to describe the triangle; one vertex will be degenerate, or not described.
4. Brief Description of the Drawings
Additional objects and features of the invention will be more readily apparent from the following detailed description and appended claims when taken in conjunction with the drawings, in which:
FIG. 1 is a block diagram illustrating aspects of a system according to an embodiment of the present invention, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values;
FIG. 2 is a block diagram illustrating aspects of a graphics processor according to an embodiment of the present invention, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values;
FIG. 3 is a block diagram illustrating other processing stages 210 of graphics pipeline 200 according to a preferred embodiment of the present invention;
FIG. 4 is a block diagram illustrating other processing stages 220 of graphics pipeline 200 according to a preferred embodiment of the present invention;

FIG. 5 illustrates vertex assignments according to a uniform primitive description according to one embodiment of the present invention, for describing polygons with an inventive descriptive syntax;

FIG. 8 illustrates a block diagram of functional units of setup 215 according to an embodiment of the present invention, the functional units implementing the methodology of the present invention;
FIG. 9 illustrates use of triangle slope assignments according to an embodiment of the present invention;
FIG. 10 illustrates slope assignments for triangles and line segments according to an embodiment of the present invention;
FIG. 11 illustrates aspects of line segment orientation according to an embodiment of the present invention;

FIG. 12 illustrates aspects of line segment slopes according to an embodiment of the present invention;
FIG. 13 illustrates aspects of point preprocessing according to an embodiment of the present invention;
FIG. 14 illustrates the relationship of trigonometric functions to line segment orientations;
FIG. 15 illustrates aspects of line segment quadrilateral generation according to an embodiment of the present invention;
FIG. 16 illustrates examples of x-major and y-major line orientation with respect to aliased and anti-aliased lines according to an embodiment of the present invention;
FIG. 17 illustrates presorted vertex assignments for quadrilaterals;
FIG. 18 illustrates a primitive's clipping points with respect to the primitive's intersection with a tile;
FIG. 19 illustrates aspects of processing quadrilateral vertices that lie outside of a 2-D window according to an embodiment of the present invention;
FIG. 20 illustrates an example of a triangle's minimum depth value vertex candidates according to an embodiment of the present invention;
FIG. 21 illustrates examples of quadrilaterals having vertices that lie outside of a 2-D window range;

FIG. 22 illustrates aspects of clip code vertex assignment according to an embodiment of the present invention; and,
FIG. 23 illustrates aspects of unified primitive descriptor assignments, including corner flags, according to an embodiment of the present invention.

5. Detailed Description of Preferred Embodiments of the Invention
The invention will now be described in detail by way of illustrations and examples for purposes of clarity and understanding. It will be readily apparent to those of ordinary skill in the art in light of the teachings of this invention that certain changes and modifications may be made thereto without departing from the spirit or scope of the appended claims. We first provide a top-level system architectural description. Section headings are provided for convenience and are not to be construed as limiting the disclosure, as aspects of the invention described in one section may also apply to other sections. Pseudocode examples are presented in this detailed description to illustrate procedures of the present invention. The pseudocode used is, essentially, a computer language using universal computer language conventions. While the pseudocode employed in this description has been invented solely for the purposes of this description, it is designed to be easily understandable by any computer programmer skilled in the art.
For purposes of explanation, the numerical precision of the calculations of the present invention is based on the precision requirements of previous and subsequent stages of the graphics pipeline. The numerical precision to be used depends on a number of factors. Such factors include, for example, order of operations, number of operations, screen size, tile size, buffer depth, sub pixel precision, and precision of data. Numerical precision issues are known, and for this reason will not be described in greater detail herein.
5.1 System Overview
Significant aspects of the structure and method of the present invention include:
(1) a mid-pipeline post tile sorting setup that supports a mid-pipeline sorting unit and supports a mid-pipeline culling unit; (2) a procedure for uniformly describing primitives that allows different types of primitives to share common sets of algorithms/equations/hardware elements in the graphics pipeline; and, (3) tile-relative y-values and screen-relative x-values that allow representation of spatial data on a region by region basis that is efficient and feasible for a tile based graphics pipeline architecture. Each of these significant aspects is described in greater detail below.
Referring to FIG. 1, there is shown an embodiment of system 100, for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors, post tile sorting setup, tile relative y-values, and screen relative x-values. In particular, FIG. 1 illustrates how various software and hardware elements cooperate with each other. System 100 utilizes a programmed general-purpose computer 101, and 3-D graphics processor 117. Computer 101 is generally conventional in design, comprising: (a) one or more data processing units ("CPUs") 102; (b) memory 106a, 106b and 106c, such as fast primary memory 106a, cache memory 106b, and slower secondary memory 106c, for mass storage, or any combination of these three types of memory; (c) optional user interface 105, including display monitor 105a, keyboard 105b, and pointing device 105c; (d) graphics port 114, for example, an advanced graphics port ("AGP"), providing an interface to specialized graphics hardware; (e) 3-D graphics processor 117 coupled to graphics port 114 across I/O bus 112, for providing high-performance 3-D graphics processing; and (f) one or more communication busses 104, for interconnecting CPU 102, memory 106, specialized graphics hardware 114, 3-D graphics processor 117, and optional user interface 105. I/O bus 112 can be any type of peripheral bus including but not limited to an advanced graphics port bus, a Peripheral Component Interconnect (PCI) bus, Industry Standard Architecture (ISA) bus, Extended Industry Standard Architecture (EISA) bus, MicroChannel Architecture, SCSI bus, and the like. In a preferred embodiment, I/O bus 112 is an advanced graphics port bus.
The present invention also contemplates that one embodiment of computer 101 may have a command buffer (not shown) on the other side of graphics port 114, for queuing graphics hardware I/O directed to graphics processor 117.
Memory 106a typically includes operating system 108 and one or more application programs 110, or processes, each of which typically occupies a separate address space in memory 106 at runtime. Operating system 108 typically provides basic system services, including, for example, support for an Application Program Interface ("API") for accessing 3-D graphics APIs such as Graphics Device Interface, DirectDraw/Direct3-D, and OpenGL. Graphics Device Interface, DirectDraw/Direct3-D, and OpenGL are all well-known APIs, and for that reason are not discussed in greater detail herein. The application programs 110 may, for example, include user level programs for viewing and manipulating images.
It will be understood that a laptop or other type of computer, a workstation on a local area network connected to a server, or a dedicated gaming console can also be used instead of computer 101 in connection with the present invention.
Accordingly, it should be apparent that the details of computer 101 are not particularly relevant to the present invention. Personal computer 101 simply serves as a convenient interface for receiving and transmitting messages to 3-D graphics processor 117. Referring to FIG. 2, there is shown an exemplary embodiment of 3-D graphics processor 117, which may be provided as a separate PC Board within computer 101, as a processor integrated onto the motherboard of computer 101, or as a stand-alone processor, coupled to graphics port 114 across I/O bus 112, or other communication link. Setup 215 is implemented as one processing stage of multiple processing stages in graphics processor 117. (Setup 215 corresponds to "setup stage 8000," as illustrated in United States Provisional Patent Application Serial Number 60/097,336).
Setup 215 is connected to other processing stages 210 across internal bus 211 and signal line 212. Setup 215 is connected to other processing stages 220 across internal bus 216 and signal line 217. Internal bus 211 and internal bus 216 can be any type of peripheral bus including but not limited to a Peripheral Component Interconnect (PCI) bus, Industry Standard Architecture (ISA) bus, Extended Industry Standard Architecture (EISA) bus, MicroChannel Architecture, SCSI Bus, and the like. In a preferred embodiment, internal bus 211 is a dedicated on-chip bus.
5.1.1 Other Processing Stages 210
Referring to FIG. 3, there is shown an example of a preferred embodiment of other processing stages 210, including, command fetch and decode 305, geometry 310, mode extraction 315, and sort 320. We will now briefly discuss each of these other processing stages 210.
Cmd Fetch / Decode 305, or "CFD 305," handles communications with host computer 101 through graphics port 114. CFD 305 sends 2-D screen based data, such as bitmap blit window operations, directly to backend 440 (see FIG. 4), because 2-D data of this type does not typically need to be processed further by the other processing stages 210 or other processing stages 220. All 3-D operation data (e.g., necessary transform matrices, material and light parameters and other mode settings) are sent by CFD 305 to geometry 310.
Geometry 310 performs calculations that pertain to displaying frame geometric primitives, hereinafter, often referred to as "primitives," such as points, line segments, and triangles, in a 3-D model. These calculations include transformations, vertex lighting, clipping, and primitive assembly. Geometry 310 sends "properly oriented" geometry primitives to mode extraction 315.
Mode extraction 315 separates the input data stream from geometry 310 into two parts: (1) spatial data, such as frame geometry coordinates, and any other information needed for hidden surface removal; and, (2) non-spatial data, such as color, texture, and lighting information. Spatial data are sent to sort 320. The non-spatial data are stored into polygon memory (not shown), from which mode injection 415 (see FIG. 4) later reintroduces them into pipeline 200.
Sort 320 sorts vertices and mode information with respect to multiple regions in a 2-D window. Sort 320 outputs the spatially sorted vertices and mode information on a region-by-region basis to setup 215. The details of other processing stages 210 are not necessary to practice the present invention, and for that reason other processing stages 210 are not discussed in further detail here.
5.1.2 Other Processing Stages 220

Referring to FIG. 4, there is shown an example of a preferred embodiment of other processing stages 220, including, cull 410, mode injection 415, fragment 420, texture 425, Phong lighting 430, pixel 435, and backend 440. The details of each of the processing stages in other processing stages 220 are not necessary to practice the present invention. However, for purposes of completeness, we will now briefly discuss each of these processing stages.
Cull 410 receives data from a previous stage in the graphics pipeline, such as setup 405, in region-by-region order, and discards any primitives, or parts of primitives that definitely do not contribute to the rendered image. Cull 410 outputs spatial data that are not hidden by previously processed geometry.
Mode injection 415 retrieves mode information (e.g., colors, material properties, etc.) from polygon memory, such as other memory 235, and passes it to a next stage in graphics pipeline 200, such as fragment 420, as required. Fragment 420 interpolates color values for Gouraud shading, surface normals for Phong shading, and texture coordinates for texture mapping, and interpolates surface tangents for use in a bump mapping algorithm (if required).
Texture 425 applies texture maps, stored in a texture memory, to pixel fragments. Phong 430 uses the material and lighting information supplied by mode injection 415 to perform Phong shading for each pixel fragment. Pixel 435 receives visible surface portions and the fragment colors and generates the final picture. And, backend 440 receives a tile's worth of data at a time from pixel 435 and stores the data into a frame display buffer.
5.2 Setup 215 Overview
Setup 215 receives a stream of image data from a previous processing stage of pipeline 200. In a preferred embodiment of the present invention, the previous processing stage is sort 320 (see FIG. 3). These image data include spatial information about geometric primitives (hereinafter, often referred to as "primitives") to be rendered by pipeline 200. The primitives received from sort 320 can include, for example, filled triangles, line mode triangles, lines, stippled lines, and points. These image data also include mode information, that is, information that does not necessarily apply to any one particular primitive, but rather, probably applies to multiple primitives. Mode information is not processed by the present invention, but simply passed through to a subsequent stage of pipeline 200, for example, cull 410, and for this reason will not be discussed in further detail herein.
By the time that setup 215 receives the image data from sort 320, the primitives have already been sorted, by sort 320, with respect to regions in a 2-D window that are intersected by the respective primitives. Setup 215 receives this image data on a region-by-region basis. That is to say that all the primitives that intersect a respective region will be sent to setup 215 before all the primitives that intersect a different respective region are sent to setup 215, and so on. This means that sort 320 may send the same primitive many times, once for each region it intersects, or "touches." In a preferred embodiment of the present invention, each region of the 2-D window is a rectangular tile.
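As an illustration of "touching," the following sketch enumerates the tiles overlapped by a primitive's bounding box. The tile size and all names are assumptions for illustration only, and a bounding box test can overestimate the tiles a thin diagonal primitive actually intersects.

```python
TILE_W, TILE_H = 16, 16  # assumed tile size in pixels

def tiles_touched(verts):
    """Return (tile_x, tile_y) indices of every tile overlapped by the
    bounding box of the given (x, y) vertices."""
    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    x0, x1 = int(min(xs)) // TILE_W, int(max(xs)) // TILE_W
    y0, y1 = int(min(ys)) // TILE_H, int(max(ys)) // TILE_H
    return [(tx, ty) for ty in range(y0, y1 + 1)
                     for tx in range(x0, x1 + 1)]

# A triangle spanning x in [10, 40] and y in [5, 20] overlaps six tiles,
# so a sorter would send that triangle downstream once per tile touched.
touched = tiles_touched([(10, 5), (40, 8), (25, 20)])
```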
Setup 215 receives the image data from sort 320 either organized in "time order" or in "sorted transparency order." In time order, the time order of receipt by all previous processing stages of pipeline 200 of the vertices and modes within each tile is preserved. That is, for a given tile, vertices and modes are read out of previous stages of pipeline 200 just as they were received, with the exception of when sort 320 is in sorted transparency mode. For purposes of explanation, in sorted transparency mode, "guaranteed opaque" primitives are received by setup 215 first, before setup 215 receives potentially transparent geometry. In this context, guaranteed opaque means that a primitive completely obscures more distant primitives that occupy the same spatial area in a window. Potentially transparent geometry is any geometry that is not guaranteed opaque.
Setup 215 prepares the incoming image data for processing by cull 410. Setup 215 processes one tile's worth of image data, one primitive at a time. When it is done processing a primitive, it sends the data to cull 410 (see FIG. 4) in the form of a setup output primitive packet 8000 (see Table 8). Each setup output primitive packet 8000 output from setup 215 represents one primitive: a triangle, line segment, or point. We now briefly describe cull 410 (see FIG. 4) so that the preparatory processing performed by setup 215 (in anticipation of culling) may be more readily understood.
Cull 410 produces the visible stamp portions, or "VSPs," used by subsequent processing stages in pipeline 200. In a preferred embodiment of the present invention, a stamp is a region two pixels by two pixels in dimension; one pixel contains four sample points; and, one tile has 16 stamps (8 pixels by 8 pixels). However, according to the teaching of the present invention, any convenient number of pixels in a stamp, sample points in a pixel, and stamps in a tile may be used.
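The stamp/tile arithmetic of the preferred embodiment can be checked with a tiny sketch (all constants below are taken from the embodiment just described; the variable names are hypothetical):

```python
STAMP_W = STAMP_H = 2     # a stamp is two pixels by two pixels
SAMPLES_PER_PIXEL = 4     # four sample points per pixel
TILE_W = TILE_H = 8       # a tile is 8 pixels by 8 pixels

# 4 x 4 stamps fit in an 8 x 8 pixel tile, giving the 16 stamps per tile
# recited above; each stamp covers 2 * 2 * 4 = 16 sample points.
stamps_per_tile = (TILE_W // STAMP_W) * (TILE_H // STAMP_H)
samples_per_stamp = STAMP_W * STAMP_H * SAMPLES_PER_PIXEL
```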
Cull 410 receives image data from setup 215 in tile order (in fact, in the order that setup 215 receives the image data from sort 320), and culls out those primitives and parts of primitives that definitely do not contribute to a rendered image. Cull 410 accomplishes this in two stages, the MCCAM cull 410 stage and the Z cull 410 stage. MCCAM cull 410 allows detection of those memory elements in a rectangular, spatially addressable memory array whose "content" (depth values) are greater than a given value. Spatially addressable memory is known.
To prepare the incoming image data for processing by MCCAM cull, setup 215, for each primitive: (a) determines the dimensions of a tight bounding box that circumscribes that part of the primitive that intersects a tile; and, (b) computes a minimum depth value "Zmin" for that part of the primitive that intersects the tile. This is beneficial because MCCAM cull 410 uses the dimensions of the bounding box and the minimum depth value to determine which of multiple "stamps," each stamp lying within the dimensions of the bounding box, may contain depth values less than Zmin. The procedures for determining the dimensions of a bounding box and the procedures for producing a minimum depth value are described in greater detail below. (For purposes of simplifying the description, those stamps that lie within the dimensions of the bounding box are hereinafter referred to as "candidate stamps.")
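The following minimal sketch shows the flavor of steps (a) and (b). The names are hypothetical, and Zmin here is conservatively taken as the minimum vertex depth, whereas a full implementation would evaluate depth at the primitive/tile clipping points.

```python
def bbox_and_zmin(verts, tile):
    """verts: [(x, y, z), ...]; tile: (left, top, right, bottom).
    Returns ((xmin, ymin, xmax, ymax), zmin) with the bounding box
    clamped to the tile, or None if the primitive's box misses the tile."""
    left, top, right, bottom = tile
    xmin = max(min(v[0] for v in verts), left)
    xmax = min(max(v[0] for v in verts), right)
    ymin = max(min(v[1] for v in verts), top)
    ymax = min(max(v[1] for v in verts), bottom)
    if xmin > xmax or ymin > ymax:
        return None          # primitive does not touch this tile
    # Conservative minimum depth: never larger than the true Zmin of
    # the intersected part, so culling remains correct.
    zmin = min(v[2] for v in verts)
    return (xmin, ymin, xmax, ymax), zmin

box, zmin = bbox_and_zmin([(4, 4, 0.5), (30, 6, 0.2), (12, 30, 0.8)],
                          (0, 0, 15, 15))
```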
Z cull 410 refines the work performed by MCCAM cull 410 in the process of determining which samples are visible, by taking these candidate stamps, and if they are part of the primitive, computing the actual depth value for samples in that stamp. This more accurate depth value is then compared, on a sample-by-sample basis, to the z-values stored in a z-buffer memory in cull 410 to determine if the sample is visible. A sample-by-sample basis simply means that each sample is compared individually, as compared to a step where a whole bounding box is compared at once.
For those primitives that are lines and triangles, setup 215 also calculates spatial derivatives. A spatial derivative is a partial derivative of the depth value. Spatial derivatives are also known as z-slopes, or depth gradients. As discussed above, the minimum depth value and a bounding box are utilized by MCCAM cull 410. Setup 215 also determines a reference stamp in the bounding box (described in greater detail below) that contains the vertex with the minimum z-value (discussed in greater detail below in section 5.4.10). The depth gradients and zref are used by Z cull 410. Line (edge) slopes, intersections, and corners (top and bottom) are used by Z cull 410 for edge walking.
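For a triangle, the depth gradients follow from the plane through its three vertices. The sketch below (assumed names, not the patent's own equations) recovers dz/dx and dz/dy from that plane equation.

```python
def depth_gradients(v0, v1, v2):
    """Each v is (x, y, z). Returns (dz/dx, dz/dy) of the plane
    through the three vertices."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    # Twice the signed area of the triangle's screen projection; zero
    # would mean a degenerate (edge-on) triangle.
    area2 = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    dzdx = ((z1 - z0) * (y2 - y0) - (z2 - z0) * (y1 - y0)) / area2
    dzdy = ((z2 - z0) * (x1 - x0) - (z1 - z0) * (x2 - x0)) / area2
    return dzdx, dzdy

# For the plane z = 2x + 3y, the gradients recover (2, 3) exactly.
g = depth_gradients((0, 0, 0), (1, 0, 2), (0, 1, 3))
```

Given these gradients and a reference depth, a cull stage can evaluate depth at any sample by two multiply-adds, which is why setup computes them once per primitive.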
5.2.1 Interface I/O With Other Processing Stages of the Pipeline
Setup 215 interfaces with a previous stage of pipeline 200, for example, sort 320 (see FIG. 3), and a subsequent stage of pipeline 200, for example, cull 410 (see FIG. 4). We now discuss sort 320 output packets.
5.2.1.1 Sort 320 Setup 215 Interface
Referring to table 1, there is shown an example of a begin frame packet 1000, for delimiting the beginning of a frame of image data. Begin frame packet 1000 is received by setup 215 from sort 320. Referring to table 2, there is shown an example of a begin tile packet 2000, for delimiting the beginning of that particular tile's worth of image data.
Referring to table 4, there is shown an example of a clear packet 4000, for indicating a buffer clear event. Referring to table 5, there is shown an example of a cull packet 5000, for indicating, among other things, the packet type 5010. Referring to table 6, there is shown an example of an end frame packet 6000, for indicating, by sort 320, the end of a frame of image data. Referring to table 7, there is shown an example of a primitive packet 7000, for identifying information with respect to a primitive. Sort 320 sends one primitive packet 7000 to setup 215 for each primitive.
5.2.1.2 Setup 215 Cull 410 Interface
Referring to table 8, there is shown an example of a setup output primitive packet 8000, for indicating to a subsequent stage of pipeline 200, for example, cull 410, a primitive's information, including, information determined by setup 215. Such setup 215 determined information is discussed in greater detail below.
5.2.2 Setup Primitives
To set the context of the present invention, we briefly describe geometric primitives, including, for example, polygons, lines, and points.
5.2.2.1 Polygons

Polygons arriving at setup 215 are essentially triangles, either filled triangles or line mode triangles. A filled triangle is expressed as three vertices, whereas a line mode triangle is treated by setup 215 as three individual line segments. Setup 215 receives window coordinates (x, y, z) defining three triangle vertices for both line mode triangles and for filled triangles. Note that the aliased state of the polygon (either aliased or anti-aliased) does not alter the manner in which filled polygon setup is performed by setup 215. Line mode triangles are discussed in greater detail below.
5.2.2.2 Lines

Line segments arriving at setup 215 essentially comprise a width and two end points. Setup 215 does not modify the incoming line widths. A line segment may be stippled. A line segment may be aliased or anti-aliased. In a preferred embodiment of the present invention, a line's width is determined prior to setup 215. For example, it can be determined by a 3-D graphics processing application executing on computer 101 (see FIG. 1).
5.2.2.3 Points
Pipeline 200 renders anti-aliased points as circles and aliased points as squares. Both circles and squares have a width. In a preferred embodiment of the present invention, a point's size and position are determined in a previous processing stage of pipeline 200, for example, geometry 310.
5.3 Unified Primitive Description
Under the rubric of a unified primitive, we consider a line segment primitive to be a rectangle and a triangle to be a degenerate rectangle, and each is represented mathematically as such. We now discuss a procedure for uniformly describing primitives that allows different types of primitives to share common sets of algorithms/equations/hardware elements in the graphics pipeline.
Setup 215 describes each primitive with a set of four vertices. Note that not all vertex values are needed to describe all primitives. To describe a triangle, setup 215 uses a triangle's top vertex, bottom vertex, and either left corner vertex or right corner vertex, depending on the triangle's orientation. A line segment is treated as a parallelogram, so setup 215 uses all four vertices to describe a line segment. FIG. 16 shows examples of quadrilaterals generated for line segments. Note that quadrilaterals are generated differently for aliased and anti-aliased lines. For aliased lines, a quadrilateral's vertices also depend on whether the line is x-major or y-major. Note also that while a triangle's vertices are the same as its original vertices, setup 215 generates new vertices to represent a line segment as a parallelogram.
The unified representation of primitives uses two sets of descriptors to represent a primitive. The first set includes vertex descriptors, each of which is assigned to one of the original set of vertices in window coordinates. Vertex descriptors include VtxYmin, VtxYmax, VtxXmin, and VtxXmax. The second set of descriptors are flag descriptors, or corner flags, used by setup 215 to indicate which vertex descriptors have valid and meaningful values. Flag descriptors include VtxLeftC, VtxRightC, LeftCorner, RightCorner, VtxTopC, VtxBotC, TopCorner, and BottomCorner. FIG. 23 illustrates aspects of unified primitive descriptor assignments, including corner flags. All of these descriptors have valid values for line segment primitives, but all of them may not be valid for triangles. Treating triangles as rectangles according to the teachings of the present invention involves specifying four vertices, one of which (typically y-left or y-right in one particular embodiment) is degenerate and not specified. To illustrate this, refer to FIG. 22, and triangle 20, where a left corner vertex (VtxLeftC) is degenerate, or not defined. With respect to triangle 10, a right corner vertex (VtxRightC) is degenerate. Using primitive descriptors according to the teachings of the present invention to describe triangles and line segments as rectangles provides a uniform way to set up primitives, because the same (or similar) algorithms/equations/calculations/hardware can be used to operate on different primitives, as in, for example, the edge walking algorithm in cull 410 (see FIG. 4), thus allowing for a more streamlined implementation of logic. We now describe how the primitive descriptors are determined.
In a preferred embodiment of the present invention, for line segments the VtxYmin, VtxYmax, VtxLeftC, VtxRightC, LeftCorner, and RightCorner descriptors are assigned when line quadrilateral vertices are generated (see section 5.4.5.1). VtxYmin is the vertex with the minimum y value. VtxYmax is the vertex with the maximum y value. VtxLeftC is the vertex that lies to the left of the diagonal formed by joining the vertices VtxYmin and VtxYmax for line segments. VtxRightC is the vertex that lies to the right of the diagonal formed by joining the vertices VtxYmin and VtxYmax for line segments.
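One way to make the left/right classification concrete is a cross-product side test against the diagonal, sketched below with hypothetical names and a y-up sign convention (a y-down screen convention would flip the sign).

```python
def side_of_diagonal(vmin, vmax, p):
    """Positive if p lies to the left of the directed diagonal
    vmin -> vmax, negative if to the right (y-up convention)."""
    return ((vmax[0] - vmin[0]) * (p[1] - vmin[1])
            - (vmax[1] - vmin[1]) * (p[0] - vmin[0]))

def classify_corners(vmin, vmax, a, b):
    """Return (VtxLeftC, VtxRightC) for the quadrilateral's two
    remaining corners a and b."""
    if side_of_diagonal(vmin, vmax, a) > 0:
        return a, b
    return b, a

# Vertical diagonal from (0, 0) to (0, 10): (-1, 5) lies to its left,
# (1, 5) to its right.
left_c, right_c = classify_corners((0, 0), (0, 10), (1, 5), (-1, 5))
```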
Referring to FIG. 5, we now describe one embodiment of how the VtxYmin, VtxYmax, VtxLeftC, VtxRightC, LeftCorner, and RightCorner descriptors are obtained for triangles. At step 5, the vertices are sorted with respect to the y-direction. The procedures for sorting a triangle's coordinates with respect to y are discussed in greater detail below in section 5.4.1.1. At step 10, VtxYmin, the vertex with the minimum y value, and VtxYmax, the vertex with the maximum y value, are assigned their respective values in a similar manner as that described immediately above with respect to line segments.
At step 15, it is determined whether the long y-edge is equal to the left edge. For purposes of illustrating aspects of mapping to a triangle's long x-edge, long y-edge, top edge, bottom edge, right edge, and left edge, refer to FIG. 8. A triangle has exactly two edges that share the topmost vertex (VtxYmax). Of these two edges, the one edge with an end point furthest to the left is the left edge. Analogous to this, the one edge with an end point furthest to the right is the right edge.
Referring to FIG. 5, if the long y-edge is equal to the left edge (step 15), at step 25 LeftCorner is set to equal FALSE, meaning that VtxLeftC is degenerate, or not defined. If the long y-edge is not equal to the left edge (step 15), at step 20, procedure for uniformly describing primitives 500 assigns a value to VtxLeftC and sets LeftCorner equal to TRUE. For triangles, VtxLeftC is the vertex that lies to the left of the edge of the triangle formed by joining the vertices VtxYmin and VtxYmax (hereinafter, also referred to as the "long y-edge"). The procedure for determining whether a triangle has a left corner is discussed in greater detail below in section 5.4.1.3. At step 30, it is determined whether the long y-edge is equal to the right edge, and if so, at step 35, RightCorner is set to equal FALSE, representing that VtxRightC is degenerate, or undefined. However, if the long y-edge is not equal to the right edge (step 30), at step 40, a value is assigned to VtxRightC and RightCorner is set to TRUE, indicating that VtxRightC contains a valid value. VtxRightC is the vertex that lies to the right of the long y-edge in the case of a triangle. The procedure for determining whether a triangle has a right corner is discussed in greater detail below in section 5.4.1.3. Note that in practice VtxYmin, VtxYmax, VtxLeftC, and VtxRightC are indices into the original primitive vertices. Setup 215 uses VtxYmin, VtxYmax, VtxLeftC, VtxRightC, LeftCorner, and RightCorner to clip a primitive with respect to the top and bottom edges of the tile. Clipping is described in greater detail below in section 5.4.6.
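A sketch of this FIG. 5 flow for triangles, with an assumed data layout, hypothetical names, and a y-up sign convention, might look like:

```python
def triangle_y_descriptors(verts):
    """verts: three (x, y) tuples. Returns a dict of y-direction
    descriptors; for a proper triangle exactly one of LeftCorner /
    RightCorner is True and the other corner vertex is degenerate."""
    s = sorted(verts, key=lambda v: v[1])   # sort in y (step 5)
    vmin, vmid, vmax = s                    # VtxYmin = vmin, VtxYmax = vmax
    # Which side of the long y-edge (VtxYmin -> VtxYmax) holds the
    # middle vertex: positive means left, negative means right (y-up).
    cross = ((vmax[0] - vmin[0]) * (vmid[1] - vmin[1])
             - (vmax[1] - vmin[1]) * (vmid[0] - vmin[0]))
    return {"VtxYmin": vmin, "VtxYmax": vmax,
            "LeftCorner": cross > 0, "RightCorner": cross < 0,
            "VtxLeftC": vmid if cross > 0 else None,
            "VtxRightC": vmid if cross < 0 else None}

d = triangle_y_descriptors([(0, 0), (4, 2), (1, 6)])
```

In this sketch the middle vertex is stored directly; as noted above, an implementation would instead store indices into the original primitive vertices.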
In a preferred embodiment of the present invention, for line segments the VtxXmin, VtxXmax, VtxTopC, VtxBotC, TopCorner, and BottomCorner descriptors are assigned when the line quad vertices are generated (see section 5.4.5.1). VtxXmin is the vertex with the minimum x value. VtxXmax is the vertex with the maximum x value. VtxTopC is the vertex that lies above the diagonal formed by joining the vertices VtxXmin and VtxXmax for parallelograms. VtxBotC is the vertex that lies below the long x-edge in the case of a triangle, and below the diagonal formed by joining the vertices VtxXmin and VtxXmax in the case of a parallelogram.
Referring to FIG. 7, we now describe the procedure for determining a set of unified primitive descriptors for a triangle primitive with respect to the x-coordinates. In particular, we illustrate how the VtxXmin, VtxXmax, VtxTopC, VtxBotC, TopCorner, and BottomCorner descriptors are obtained.
At step 5, the vertices are sorted with respect to the x-direction. The procedures for sorting a triangle's coordinates with respect to x are discussed in greater detail below in section 5.4.1.4. At step 10, VtxXmin and VtxXmax are assigned values as in the discussion immediately above with respect to line segments. At step 15, it is determined whether the triangle's long x-edge is equal to the triangle's top edge, and if so, at step 20, TopCorner is set to equal FALSE, indicating that VtxTopC is degenerate, or not defined. A triangle has two edges that share the maximum x vertex (VtxXmax). The topmost of these two edges is the "top edge." Analogous to this, the bottommost of these two edges is the "bottom edge."
If the triangle's long x-edge is not equal to the triangle's top edge (step 15), at step 25, VtxTopC is assigned an appropriate value and TopCorner is set to equal TRUE, indicating that VtxTopC contains a valid value. The appropriate value for VtxTopC is the vertex that lies above the edge joining vertices VtxXmin and VtxXmax (hereinafter, this edge is often referred to as the "long x-edge"). The procedure for determining whether a triangle has a top corner is discussed in greater detail below in section 5.4.1.5. At step 30, it is determined whether the long x-edge is equal to the bottom edge, and if so, at step 40, BottomCorner is set to equal FALSE, indicating that VtxBotC is degenerate, or not defined. If the long x-edge is not equal to the bottom edge (step 30), then an appropriate value is assigned to VtxBotC and BottomCorner is set to equal TRUE, indicating that VtxBotC contains a valid value. The appropriate value for VtxBotC is the vertex that lies below the long x-edge. The procedure for determining whether a triangle has a bottom corner is discussed in greater detail below in section 5.4.1.5.
Note that in practice VtxXmin, VtxXmax, VtxTopC, and VtxBotC are indices into the original triangle primitive. Setup 215 uses VtxXmin, VtxXmax, VtxTopC, VtxBotC, TopCorner, and BottomCorner to clip a primitive with respect to the left and right edges of a tile. Clipping will be described in greater detail below.
To illustrate the use of the unified primitive descriptors of the present invention, refer to FIG. 23, where there is shown an illustration of multiple triangles and line segments described using vertex descriptors and flag descriptors according to a preferred embodiment of the unified primitive description of the present invention.
In this manner the procedure for uniformly describing primitives allows different types of primitives to share common sets of algorithms/equations/hardware elements in the graphics pipeline.
5.4 High Level Functional Unit Architecture
Setup's 215 I/O subsystem architecture is designed around the need to process primitive and mode information received from sort 315 (see FIG. 3) in a manner that is optimal for processing by cull 410 (see FIG. 4). To accomplish this task, setup 215 performs a number of procedures to prepare information about a primitive, with respect to a corresponding tile, for cull 410.
As illustrated in FIG. 8, an examination of these procedures yields the following functional units, which implement the corresponding procedures of the present invention: (a) triangle preprocessor 2, for generating unified primitive descriptors, calculating line slopes and reciprocal slopes of the three edges, and determining if a triangle has a left or right corner; (b) line preprocessor 2, for determining the orientation of a line, calculating the slope of the line and its reciprocal, identifying left and right slopes and reciprocal slopes, and discarding end-on lines; (c) point preprocessor 2, for calculating a set of spatial information required by a subsequent culling stage of pipeline 200; (d) trigonometric unit 3, for calculating the half widths of a line, and for processing anti-aliased lines by increasing a specified width to improve image quality; (e) quadrilateral generation unit 4, for converting lines into quadrilaterals centered around the line, and for converting aliased points into a square of appropriate width; (f) clipping unit 5, for clipping a primitive (triangle or quadrilateral) to a tile, and for generating the vertices of the new clipped polygon; (g) bounding box unit 6, for determining the smallest box that will enclose the new clipped polygon; (h) depth gradient and depth offset unit 7, for calculating depth gradients (dz/dx and dz/dy) of lines or triangles, and, for triangles, for also determining the depth offset; and, (i) Zmin and Zref unit 8, for determining minimum depth values by selecting a vertex with the smallest Z value, and for calculating the stamp center closest to the Zmin location.
FIG. 8 illustrates a preferred embodiment of the present invention where triangle preprocessor unit 2, line preprocessor unit 2, and point preprocessor unit 2 are located in the same unit 2. However, in yet other embodiments, each respective unit can be implemented as a separate unit.
In one embodiment of the present invention, input buffer 1 comprises a queue and a holding buffer. In a preferred embodiment of the present invention, the queue is approximately 32 entries deep by approximately 140 bytes wide. Input data packets from a previous process in pipeline 200, for example, sort 320, requiring more bits than the queue is wide will be split into two groups and occupy two entries in the queue. The queue is used to balance the different data rates between sort 320 (see FIG. 3) and setup 215. The present invention contemplates that sort 320 and setup 215 cooperate if input queue 1 reaches capacity. The holding buffer holds vertex information read from a triangle primitive while setup 215 breaks the triangle into its visible edges for line-mode triangles.
Output buffer 10 is used by setup 215 to queue image data processed by setup 215 for delivery to a subsequent stage of pipeline 200, for example, cull 410. As discussed above, FIG. 8 also illustrates the data flow between the functional units that implement the procedures of the present invention.
The following subsections detail the architecture and procedures of each of these functional units.
5.4.1 Triangle Preprocessing
For triangles, setup starts with a set of vertices, (x0, y0, z0), (x1, y1, z1), and (x2, y2, z2). Setup 215 assumes that the vertices of a filled triangle fall within a valid range of window coordinates, that is to say, that a triangle's coordinates have been clipped to the boundaries of the window. This procedure can be performed by a previous processing stage of pipeline 200, for example, geometry 310 (see FIG. 3).
In a preferred embodiment of the present invention, triangle preprocessing unit 2 first generates unified primitive descriptors for each triangle that it receives. Refer to section 5.3 for a more detailed discussion of unified primitive descriptors. The triangle preprocessor: (1) sorts the three vertices in the y-direction, to determine the top-most vertex (VtxYmax), middle vertex (either VtxRightC or VtxLeftC), and bottom-most vertex (VtxYmin); (2) calculates the slopes and reciprocal slopes of the triangle's three edges; (3) determines if the y-sorted triangle has a left corner (LeftCorner) or a right corner (RightCorner); (4) sorts the three vertices in the x-direction, to determine the right-most vertex (VtxXmax), middle vertex, and left-most vertex (VtxXmin); and, (5) identifies the slopes that correspond to the x-sorted Top (VtxTopC), Bottom (VtxBotC), or Left.
5.4.1.1 Sort With Respect to the Y Axis
The present invention sorts the filled triangle's vertices in the y-direction using, for example, the following three equations.
Y1GeY0 = (Y1 > Y0) | ((Y1 == Y0) & (X1 > X0))
Y2GeY1 = (Y2 > Y1) | ((Y2 == Y1) & (X2 > X1))
Y0GeY2 = (Y0 > Y2) | ((Y0 == Y2) & (X0 > X2))
With respect to the immediately above three equations: (a) "Ge" represents a greater-than-or-equal-to relationship; (b) the "|" symbol represents a logical "or"; and, (c) the "&" symbol represents a logical "and." Y1GeY0, Y2GeY1, and Y0GeY2 are Boolean values.
The time-ordered vertices are V0, V1, and V2, where V0 is the oldest vertex, and V2 is the newest vertex. Pointers are used by setup 215 to identify which time-ordered vertex corresponds to which y-sorted vertex, including top (VtxYmax), middle (VtxLeftC or VtxRightC), and bottom (VtxYmin). For example,
YsortTopSrc = {Y2GeY1 & !Y0GeY2, Y1GeY0 & !Y2GeY1, !Y1GeY0 & Y0GeY2}
YsortMidSrc = {Y2GeY1 ⊕ !Y0GeY2, Y1GeY0 ⊕ !Y2GeY1, !Y1GeY0 ⊕ Y0GeY2}
YsortBotSrc = {!Y2GeY1 & Y0GeY2, !Y1GeY0 & Y2GeY1, Y1GeY0 & !Y0GeY2}
YsortTopSrc represents a three-bit encoding identifying which of the time-ordered vertices is VtxYmax. YsortMidSrc represents a three-bit encoding identifying which of the time-ordered vertices is VtxYmid. YsortBotSrc represents a three-bit encoding identifying which of the time-ordered vertices is VtxYmin.
Next, pointers to map information back and forth from y-sorted to time-ordered order, time-ordered to y-sorted order, and the like, are calculated. Analogous equations are used to identify the destination of time-ordered data in y-sorted order.
Ysort0dest = {!Y1GeY0 & Y0GeY2, !Y1GeY0 ⊕ Y0GeY2, Y1GeY0 & !Y0GeY2}
Ysort1dest = {Y1GeY0 & !Y2GeY1, Y1GeY0 ⊕ !Y2GeY1, !Y1GeY0 & Y2GeY1}
Ysort2dest = {Y2GeY1 & !Y0GeY2, Y2GeY1 ⊕ !Y0GeY2, !Y2GeY1 & Y0GeY2}
The symbol "!" represents a logical "not." Ysort0dest is a pointer that identifies which y-sorted vertex V0 corresponds to. Ysort1dest is a pointer that identifies which y-sorted vertex V1 corresponds to. Ysort2dest is a pointer that identifies which y-sorted vertex V2 corresponds to. Call the de-referenced sorted vertices: VT = (XT, YT, ZT), VB = (XB, YB, ZB), and VM = (XM, YM, ZM), where VT has the largest Y and VB has the smallest Y. The word de-referencing is used to emphasize that pointers are kept. VT is VtxYmax, VB is VtxYmin, and VM is VtxYmid.
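The comparisons and one-hot source encodings above can be sketched as follows; this is an illustrative Python sketch, with the exclusive-or (⊕) rendered as inequality of Booleans and the encoding order {V2, V1, V0} taken from the text.

```python
def ysort_sources(v0, v1, v2):
    """Compute Y1GeY0/Y2GeY1/Y0GeY2 and the one-hot source pointers for
    VtxYmax / VtxYmid / VtxYmin (sketch; encoding order is {V2, V1, V0})."""
    (X0, Y0), (X1, Y1), (X2, Y2) = v0, v1, v2
    Y1GeY0 = (Y1 > Y0) or ((Y1 == Y0) and (X1 > X0))
    Y2GeY1 = (Y2 > Y1) or ((Y2 == Y1) and (X2 > X1))
    Y0GeY2 = (Y0 > Y2) or ((Y0 == Y2) and (X0 > X2))
    top = (Y2GeY1 and not Y0GeY2,          # V2 is VtxYmax?
           Y1GeY0 and not Y2GeY1,          # V1 is VtxYmax?
           (not Y1GeY0) and Y0GeY2)        # V0 is VtxYmax?
    mid = (Y2GeY1 != (not Y0GeY2),         # XOR terms select the middle
           Y1GeY0 != (not Y2GeY1),
           (not Y1GeY0) != Y0GeY2)
    bot = ((not Y2GeY1) and Y0GeY2,
           (not Y1GeY0) and Y2GeY1,
           Y1GeY0 and not Y0GeY2)
    return top, mid, bot
```

Exactly one bit is set in each encoding ("one-hot"), which is what makes the later de-referencing by pointer possible.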
Reciprocal slopes (described in greater detail below) need to be mapped to labels corresponding to the y-sorted order, because V0, V1, and V2 are time-ordered vertices. S01, S12, and S20 are the slopes of the edges respectively between: (a) V0 and V1; (b) V1 and V2; and, (c) V2 and V0. So after sorting the vertices with respect to y, we will have slopes between VT and VM, VT and VB, and VM and VB. In light of this, pointers are determined accordingly.
A preferred embodiment of the present invention maps the reciprocal slopes to the following labels: (a) YsortSTMSrc represents which time-ordered slope STM (between VT and VM) corresponds to; (b) YsortSTBSrc represents which time-ordered slope STB (between VT and VB) corresponds to; and, (c) YsortSMBSrc represents which time-ordered slope SMB (between VM and VB) corresponds to.
//Pointers to identify the source of the slopes (from time ordered to y-sorted). "Source" //simply emphasizes that these are pointers to the data.
//encoding is 3 bits, "one-hot" {S12, S01, S20}. One-hot means that only one bit can be a //"one."
//1,0,0 represents S12; 0,1,0 represents S01; 0,0,1 represents S20.
YsortSTMSrc = {!Ysort1dest[0] & !Ysort2dest[0], !Ysort0dest[0] & !Ysort1dest[0], !Ysort2dest[0] & !Ysort0dest[0]}
YsortSTBSrc = {!Ysort1dest[1] & !Ysort2dest[1], !Ysort0dest[1] & !Ysort1dest[1], !Ysort2dest[1] & !Ysort0dest[1]}
YsortSMBSrc = {!Ysort1dest[2] & !Ysort2dest[2], !Ysort0dest[2] & !Ysort1dest[2], !Ysort2dest[2] & !Ysort0dest[2]}
The indices refer to which bit of the three-bit encoding is being referenced.
Whether the middle vertex is on the left or the right is determined by comparing the slope dx2/dy of the line formed by vertices v[i2] and v[i1] with the slope dx0/dy of the line formed by vertices v[i2] and v[i0]. If (dx2/dy > dx0/dy), then the middle vertex is to the right of the long edge; else it is to the left of the long edge. The computed values are then assigned to the primitive descriptors. Assigning the x descriptors is similar. We thus have the edge slopes and vertex descriptors we need for the processing of triangles.
5.4.1.2 Slope Determination
The indices sorted in ascending y-order are used to compute a set of (dx/dy) derivatives, and the indices sorted in ascending x-order are used to compute the (dy/dx) derivatives for the edges. The steps are: (1) calculate the time-ordered slopes S01, S12, and S20; (2) map them to the y-sorted slopes STM, SMB, and STB; and, (3) do a slope comparison to map slopes to SLEFT, SRIGHT, and SBOTTOM.
The slopes are calculated for the vertices in time order. That is, (X0, Y0) represents the first vertex, or "V0," received by setup 215, (X1, Y1) represents the second vertex, or "V1," received by setup 215, and (X2, Y2) represents the third vertex, or "V2," received by setup 215.
S01 = (X1 - X0) / (Y1 - Y0) (Slope between V1 and V0.)
S12 = (X2 - X1) / (Y2 - Y1) (Slope between V2 and V1.)
S20 = (X0 - X2) / (Y0 - Y2) (Slope between V0 and V2.)
In Other Processing Stages 220 in pipeline 200, the reciprocals of the slopes are also required, to calculate intercept points in clipping unit 5 (see FIG. 8). In light of this, the following equations are used by a preferred embodiment of the present invention to calculate the reciprocals of slopes S01, S12, and S20:
SN01 = (Y1 - Y0) / (X1 - X0) (Reciprocal slope between V1 and V0.)
SN12 = (Y2 - Y1) / (X2 - X1) (Reciprocal slope between V2 and V1.)
SN20 = (Y0 - Y2) / (X0 - X2) (Reciprocal slope between V0 and V2.)
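Assuming the convention of this section (slope = dx/dy for y-sorted edges, reciprocal = dy/dx), the time-ordered slope calculations can be sketched as follows; the function and variable names are illustrative, and degenerate (horizontal or vertical) edges are not handled in this sketch.

```python
def edge_slopes(v0, v1, v2):
    """Time-ordered slopes S01, S12, S20 (dx/dy) and their reciprocals
    SN01, SN12, SN20 (dy/dx); a sketch assuming nonzero deltas."""
    def slope(a, b):
        return (b[0] - a[0]) / (b[1] - a[1])   # dx/dy
    def rslope(a, b):
        return (b[1] - a[1]) / (b[0] - a[0])   # dy/dx
    s = (slope(v0, v1), slope(v1, v2), slope(v2, v0))
    sn = (rslope(v0, v1), rslope(v1, v2), rslope(v2, v0))
    return s, sn
```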
Referring to FIG. 9, there are shown examples of triangle slope assignments. A left slope is defined as the slope (dx/dy) of the "left edge" defined earlier. A right slope is defined as the slope (dx/dy) of the "right edge" defined earlier. A bottom slope is defined as the slope (dx/dy) of the y-sorted "bottom edge" defined earlier. (There is also an x-sorted bottom edge.)
5.4.1.3 Determine Y-Sorted Left Corner or Right Corner
Call the de-referenced reciprocal slopes SNTM (reciprocal slope between VT and VM), SNTB (reciprocal slope between VT and VB), and SNMB (reciprocal slope between VM and VB). These de-referenced reciprocal slopes are significant because they represent the y-sorted slopes. That is to say, they identify slopes between y-sorted vertices.
Referring to FIG. 10, there is shown yet another illustration of slope assignments according to one embodiment of the present invention for triangles and line segments. We will now describe a slope naming convention for purposes of simplifying this detailed description.
For example, consider the slope "SlStrtEnd": "Sl" is for slope, "Strt" is the first vertex identifier, and "End" is the second vertex identifier of the edge. Thus, SlYmaxLeft represents the slope of the left edge, connecting VtxYmax and VtxLeftC. If LeftC is not valid, then SlYmaxLeft is the slope of the long edge. The letter "r" in front indicates that the slope is reciprocal. A reciprocal slope represents (∂y/∂x) instead of (∂x/∂y).
Therefore, in this embodiment, the slopes are represented as {SlYmaxLeft, SlYmaxRight, SlLeftYmin, SlRightYmin} and the inverse slopes (∂y/∂x) as {rSlXminTop, rSlXminBot, rSlTopXmax, rSlBotXmax}. In a preferred embodiment of the present invention, setup 215 compares the reciprocal slopes to determine the LeftC or RightC of a triangle. For example, if YsortSNTM is greater than or equal to YsortSNTB, then the triangle has a left corner, or "LeftC," and the following assignments can be made: (a) set LeftC equal to true ("1"); (b) set RightC equal to false ("0"); (c) set YsortSNLSrc equal to YsortSNTMSrc (identify pointer for left slope); (d) set YsortSNRSrc equal to YsortSNTBSrc (identify pointer for right slope); and, (e) set YsortSNBSrc equal to YsortSNMBSrc (identify pointer for bottom slope).
However, if YsortSNTM is less than YsortSNTB, then the triangle has a right corner, or "RightC," and the following assignments can be made: (a) set LeftC equal to false ("0"); (b) set RightC equal to true ("1"); (c) set YsortSNLSrc equal to YsortSNTBSrc (identify pointer for left slope); (d) set YsortSNRSrc equal to YsortSNTMSrc (identify pointer for right slope); and, (e) set YsortSNBSrc equal to YsortSNMBSrc (identify pointer for bottom slope).
5.4.1.4 Sort Coordinates With Respect to the X Axis
The calculations for sorting a triangle's vertices with respect to "y" also need to be repeated for the triangle's vertices with respect to "x," because an algorithm used in clipping unit 5 (see FIG. 8) needs to know the sorted order of the vertices in the x-direction. The procedure for sorting a triangle's vertices with respect to "x" is analogous to the procedure used above for sorting a triangle's vertices with respect to "y," with the exception, of course, that the vertices are sorted with respect to "x," not "y." However, for purposes of completeness, and out of an abundance of caution to provide an enabling disclosure, the equations for sorting a triangle's vertices with respect to "x" are provided below.
For the sort, do six comparisons, including, for example:
X1GeX0 = (X1 > X0) | ((X1 == X0) & (Y1 > Y0))
X2GeX1 = (X2 > X1) | ((X2 == X1) & (Y2 > Y1))
X0GeX2 = (X0 > X2) | ((X0 == X2) & (Y0 > Y2))
The results of these comparisons are used to determine the sorted order of the vertices. Pointers are used to identify which time-ordered vertex corresponds to which x-sorted vertex. In particular, pointers are used to identify the source (from the time-ordered vertices (V0, V1, and V2) to the x-sorted "destination" vertices VL, VR, and VM). As noted above, "source" simply emphasizes that these are pointers to the data.
XsortRhtSrc = {X2GeX1 & !X0GeX2, X1GeX0 & !X2GeX1, !X1GeX0 & X0GeX2}
XsortMidSrc = {X2GeX1 ⊕ !X0GeX2, X1GeX0 ⊕ !X2GeX1, !X1GeX0 ⊕ X0GeX2}
XsortLftSrc = {!X2GeX1 & X0GeX2, !X1GeX0 & X2GeX1, X1GeX0 & !X0GeX2}
Next, setup 215 identifies pointers to each destination (time-ordered to X- sorted).
Xsort0dest = {!X1GeX0 & X0GeX2, !X1GeX0 ⊕ X0GeX2, X1GeX0 & !X0GeX2}
Xsort1dest = {X1GeX0 & !X2GeX1, X1GeX0 ⊕ !X2GeX1, !X1GeX0 & X2GeX1}
Xsort2dest = {X2GeX1 & !X0GeX2, X2GeX1 ⊕ !X0GeX2, !X2GeX1 & X0GeX2}
Call the de-referenced sorted vertices VR = (XR, YR, ZR), VL = (XL, YL, ZL), and VM = (XM, YM, ZM), where VR has the largest X and VL has the smallest X. Note that x-sorted data has no ordering information available with respect to Y or Z. Note also that X, Y, and Z are coordinates, "R" equals "right," "L" equals "left," and "M" equals "middle." Context is important: the y-sorted VM is different from the x-sorted VM.
The slopes calculated above, need to be mapped to labels corresponding to the x-sorted order, so that we can identify which slopes correspond to which x-sorted edges. To accomplish this, one embodiment of the present invention determines pointers to identify the source of the slopes (from time ordered to x-sorted). For example, consider the following equations:
XsortSRMSrc = {!Xsort1dest[0] & !Xsort2dest[0], !Xsort0dest[0] & !Xsort1dest[0], !Xsort2dest[0] & !Xsort0dest[0]};
XsortSRLSrc = {!Xsort1dest[1] & !Xsort2dest[1], !Xsort0dest[1] & !Xsort1dest[1], !Xsort2dest[1] & !Xsort0dest[1]}; and,
XsortSMLSrc = {!Xsort1dest[2] & !Xsort2dest[2], !Xsort0dest[2] & !Xsort1dest[2], !Xsort2dest[2] & !Xsort0dest[2]},
where XsortSRMSrc represents the source (V0, V1, or V2) for the SRM slope between VR and VM, XsortSRLSrc represents the source for the SRL slope, and XsortSMLSrc represents the source for the SML slope.
Call the de-referenced slopes XsortSRM (slope between VR and VM), XsortSRL (slope between VR and VL) and XsortSML (slope between VM and VL).
5.4.1.5 Determine X-Sorted Top Corner or Bottom Corner and Identify Slopes
Setup 215 compares the slopes to determine the bottom corner (BotC or BottomCorner) or top corner (TopC or TopCorner) of the x-sorted triangle. To illustrate this, consider the following example, where SRM represents the slope between x-sorted VR and VM, and SRL represents the slope between x-sorted VR and VL. If SRM is greater than or equal to SRL, then the triangle has a BotC and the following assignments can be made: (a) set BotC equal to true ("1"); (b) set TopC equal to false ("0"); (c) set XsortSBSrc equal to XsortSRMSrc (identify x-sorted bottom slope); (d) set XsortSTSrc equal to XsortSRLSrc (identify x-sorted top slope); and, (e) set XsortSLSrc equal to XsortSMLSrc (identify x-sorted left slope).
However, if SRM is less than SRL, then the triangle has a top corner (TopCorner or TopC) and the following assignments can be made: (a) set BotC equal to false; (b) set TopC equal to true; (c) set XsortSBSrc equal to XsortSRLSrc (identify x-sorted bottom slope); (d) set XsortSTSrc equal to XsortSRMSrc (identify x-sorted top slope); and, (e) set XsortSLSrc equal to XsortSMLSrc (identify x-sorted left slope).
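The two branches above can be sketched as a single comparison; this is an illustrative sketch, with the pointer assignments reduced to slope-name strings for clarity.

```python
def x_corner(srm, srl):
    """Decide whether the x-sorted triangle has a bottom or a top corner
    by comparing SRM (slope VR-VM) against SRL (slope VR-VL); sketch."""
    if srm >= srl:                         # bottom corner case
        return dict(BotC=True, TopC=False,
                    bot_src="SRM", top_src="SRL", left_src="SML")
    return dict(BotC=False, TopC=True,     # top corner case
                bot_src="SRL", top_src="SRM", left_src="SML")
```

Note that the left-slope pointer is assigned identically in both branches, mirroring assignment (e) of the text.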
V0, V1, and V2 are time-ordered vertices. S01, S12, and S20 are time-ordered slopes. X-sorted VR, VL, and VM are the x-sorted right, left, and middle vertices. X-sorted SRL, SRM, and SLM are slopes between the x-sorted vertices. X-sorted ST, SB, and SL are, respectively, the x-sorted top, bottom, and left slopes. BotC, if true, means that there is a bottom corner; likewise for TopC and a top corner.
5.4.2 Line Segment Preprocessing
The object of line preprocessing unit 2 (see FIG. 6) is to: (1) determine the orientation of the line segment. A line segment's orientation includes, for example: (a) whether the line is X-major or Y-major; (b) whether the line segment points right or left (Xcnt); and, (c) whether the line segment points up or down (Ycnt). This is beneficial because Xcnt and Ycnt represent the direction of the line, which is needed for processing stippled line segments. And (2) calculate the slope and reciprocal slope of the line. This is beneficial because the slopes are used to calculate the tile intersection points, which are also passed to cull 410 (see FIG. 4).
We will now discuss how this unit of the present invention determines a line segment's orientation with respect to a corresponding tile of the 2-D window.
5.4.2.1 Line Orientation
Referring to FIG. 11, there is shown an example of aspects of line orientation according to one embodiment of the present invention. We now discuss an exemplary procedure used by setup 215 for determining whether a line segment points to the right or to the left. DX01 = X1 - X0.
If DX01 is greater than zero, then setup 215 sets XCnt equal to "up," meaning that the line segment is pointing to the right. In a preferred embodiment of the present invention, "up" is represented by a "1," and "down" is represented by a "0." Otherwise, if DX01 is less than or equal to zero, setup 215 sets XCnt equal to "down," that is to say that the line segment is pointing to the left. DX01 is the difference between X1 and X0.
We now illustrate how the present invention determines whether the line segment points up or down.
DY01=Y1-Y0;
If DY01 > 0,
Then, Ycnt = up, that is to say that the line is pointing up. Else, Ycnt = dn, that is to say that the line is pointing down.
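The Xcnt/Ycnt direction tests above can be sketched as follows; this is an illustrative sketch using the flag encoding stated in the text (1 for "up," 0 for "down"):

```python
def line_direction(x0, y0, x1, y1):
    """Xcnt/Ycnt direction flags for a line segment (sketch).
    Xcnt: 1 if the segment points right, else 0.
    Ycnt: 1 if the segment points up, else 0."""
    dx01 = x1 - x0
    dy01 = y1 - y0
    xcnt = 1 if dx01 > 0 else 0   # DX01 > 0 means pointing right
    ycnt = 1 if dy01 > 0 else 0   # DY01 > 0 means pointing up
    return xcnt, ycnt
```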
// Determine Major = X or Y (Is the line X-major or Y-major?)
If |DX01| >= |DY01| Then Major = X Else Major = Y
5.4.2.2 Line Slopes
Calculation of a line's slope is beneficial because both slopes and reciprocal slopes are used in calculating intercept points to a tile edge in clipping unit 5. The following equation is used by setup 215 to determine a line's slope.
S01 = (X1 - X0) / (Y1 - Y0)
The following equation is used by setup 215 to determine a line's reciprocal slope.
SN01 = (Y1 - Y0) / (X1 - X0)
FIG. 12 illustrates aspects of line segment slopes. Setup 215 now labels a line's slope according to the sign of the slope (S01) and based on whether the line is aliased or not. For non-antialiased lines, setup 215 sets the slope of the ends of the lines to zero. (Infinite dx/dy is discussed in greater detail below).
If S01 is greater than or equal to 0: (a) the slope of the line's left edge (SL) is set to equal S01; (b) the reciprocal slope of the left edge (SNL) is set to equal SN01; (c) if the line is anti-aliased, setup 215 sets the slope of the line's right edge (SR) to equal -SN01, and setup 215 sets the reciprocal slope of the right edge (SNR) to equal -S01; (d) if the line is not anti-aliased, the slope of the line's right edge and the reciprocal slope of the right edge are set to equal zero (infinite dx/dy); (e) LeftCorner, or LeftC, is set to equal true ("1"); and, (f) RightCorner, or RightC, is set to equal true.
However, if S01 is less than 0: (a) the slope of the line's right edge (SR) is set to equal S01; (b) the reciprocal slope of the right edge (SNR) is set to equal SN01; (c) if the line is anti-aliased, setup 215 sets the slope of the line's left edge (SL) to equal -SN01, and setup 215 sets the reciprocal slope of the left edge (SNL) to equal -S01; (d) if the line is not anti-aliased, the slope of the line's left edge and the reciprocal slope of the left edge are set to equal zero; (e) LeftCorner, or LeftC, is set to equal true ("1"); and, (f) RightCorner, or RightC, is set to equal true. Note the commonality of data: (a) SR/SNR; (b) SL/SNL; (c) SB/SNB (only for triangles); (d) LeftC/RightC; and, (e) the like.
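The sign-based labeling in the two cases above can be sketched as follows; this is an illustrative sketch, with the anti-aliased branch producing the perpendicular end-edge slopes (-SN01, -S01) described in the text.

```python
def label_line_slopes(s01, sn01, antialiased):
    """Label the left/right edge slopes of a line quad from the sign of
    S01 (sketch of section 5.4.2.2; zero means 'infinite dx/dy')."""
    if s01 >= 0:
        sl, snl = s01, sn01                                  # left edge follows the line
        sr, snr = (-sn01, -s01) if antialiased else (0, 0)   # right edge: perpendicular or degenerate
    else:
        sr, snr = s01, sn01
        sl, snl = (-sn01, -s01) if antialiased else (0, 0)
    return dict(SL=sl, SNL=snl, SR=sr, SNR=snr, LeftC=True, RightC=True)
```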
To discard end-on lines, or lines that are viewed end-on and thus are not visible, setup 215 determines whether (y1 - y0 = 0) and (x1 - x0 = 0), and if so, the line is discarded.
5.4.2.3 Line Mode Triangles
Setup 215 receives edge flags in addition to the window coordinates (x, y, z) corresponding to the three triangle vertices. Referring to table 6, there are shown edge flags (LineFlags) 5. These edge flags 5 tell setup 215 which edges are to be drawn. Setup 215 also receives a "factor" (see table 6, factor (ApplyOffsetFactor) 4) used in the computation of polygon offset. This factor "f" is used to offset the depth values in a primitive. Effectively, all depth values are offset by an amount equal to offset = max[|Zx|, |Zy|] plus factor. Factor is supplied by the user. Zx is equal to dz/dx. Zy is equal to dz/dy. The edges that are to be drawn are first offset by the polygon offset and then drawn as ribbons of width w (a line attribute). These lines may also be stippled if stippling is enabled.
For each line polygon, setup 215: (1) computes the partial derivatives of z along x and y (note that these z gradients are for the triangle and are needed to compute the z offset for the triangle; these gradients do not need to be computed if factor is zero); (2) computes the polygon offset, if polygon offset computation is enabled, and adds the offset to the z value at each of the three vertices; (3) traverses the edges in order; if an edge is visible, setup 215 draws the edge using line attributes such as the width and stipple (setup 215 processes one triangle edge at a time); (4) draws the line based on line attributes such as anti-aliased or aliased, stipple, width, and the like; and, (5) assigns the appropriate primitive code to the rectangle, depending on which edge of the triangle it represents, and sends it to cull 410. A "primitive code" is an encoding of the primitive type; for example, 01 equals a triangle, 10 equals a line, and 11 equals a point.
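Following the offset formula as stated above (offset = max[|Zx|, |Zy|] plus factor), the per-triangle depth offset in step (2) can be sketched as; the function name is illustrative:

```python
def polygon_offset(dzdx, dzdy, factor):
    """Depth offset added to each vertex z of a line-mode triangle,
    per the text: offset = max(|Zx|, |Zy|) + factor (sketch)."""
    return max(abs(dzdx), abs(dzdy)) + factor
```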
5.4.2.4 Stippled Line Processing
Given a line segment, stippled line processing utilizes "stipple information" and line orientation information (see section 5.4.2.1, Line Orientation) to reduce unnecessary processing by setup 215 of quads that lie outside of the current tile's boundaries. In particular, stipple preprocessing breaks up a stippled line into multiple individual line segments. Stipple information includes, for example, a stipple pattern (LineStipplePattern) 6 (see table 6), a stipple repeat factor (LineStippleRepeatFactor) 8, stipple start bits (StartLineStippleBit0 and StartLineStippleBit1), for example stipple start bit 12, and a stipple repeat start (for example, StartStippleRepeatFactor0) 23 (stplRepeatStart).
In a preferred embodiment of pipeline 200, Geometry 315 is responsible for computing the stipple start bit 12, and stipple repeat start 23 offsets at the beginning of each line segment. We assume that quadrilateral vertex generation unit 4 (see FIG. 8) has provided us with the half width displacements.
Stippled line preprocessing will break up a stippled line segment into multiple individual line segments, with line lengths corresponding to sequences of 1 bits in the stipple pattern, starting at the stplStart bit with a further repeat-factor start at stplRepeatStart for the first bit. To illustrate this, consider the following example. If stplStart is 14, stplRepeat is 5, and stplRepeatStart is 4, then we shall paint the 14th bit in the stipple pattern once before moving on to the 15th, i.e., the last bit in the stipple pattern. If both bits 14 and 15 are set, and the 0th stipple bit is not set, then the quad line segment will have a length of 6. In a preferred embodiment of the present invention, depth gradients, line slopes, depth offsets, x-direction widths (xhw), and y-direction widths (yhw) are common to all stipple quads of a line segment, and therefore need to be generated only once.
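The worked example above (stplStart = 14, repeat = 5, repeatStart = 4, yielding a length of 6) can be sketched as follows. This is an illustrative sketch of the first-segment length only, assuming a 16-bit stipple pattern given as a list of 0/1 values.

```python
def first_run_length(pattern_bits, start, repeat, repeat_start):
    """Length (in stipple units) of the first drawn sub-segment of a
    stippled line: consecutive 1 bits starting at 'start', where the
    first bit has already consumed 'repeat_start' of its 'repeat'
    count (sketch; pattern_bits is a 16-entry list of 0/1)."""
    if not pattern_bits[start]:
        return 0
    length = repeat - repeat_start            # remainder of the first bit
    i = (start + 1) % 16
    while pattern_bits[i] and i != start:     # subsequent fully-repeated bits
        length += repeat
        i = (i + 1) % 16
    return length
```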
Line segments are converted by Trigonometric Functions and Quadrilateral Generation Units, described in greater detail below (see sections 5.2.5.X and 5.2.5.X, respectively) into quadrilaterals, or "quads." For antialiased lines the quads are rectangles. For non-antialiased lines the quads are parallelograms.
5.4.3 Point Preprocessing
Referring to FIG. 13, there is shown an example of an unclipped circle 10 intersecting parts of a tile 15, for illustrating the various data to be determined. CYT 20 represents the circle's 10 topmost point, clipped by the tile's 15 top edge, in tile coordinates. CYB 30 represents the circle's 10 bottommost point, clipped by the tile's 15 bottom edge, in tile coordinates. Yoffset 25 represents the distance between CYT 20 and the bottom of the unclipped circle 10. X0 35 represents the "x" coordinate of the center 5 of circle 10, in window coordinates. This information is required and used by cull 410 to determine which sample points are covered by the point.
This required information for points is obtained with the following calculations:
V0 = (X0, Y0, Z0) (the center of the circle and the Zmin);
YT = Y0 + width/2;
YB = Y0 - width/2;
DYT = YT - bot (convert to tile coordinates);
DYB = YB - bot (convert to tile coordinates);
YTGtTop = DYT >= 'd16 (check the msb);
YBLtBot = DYB < 'd0 (check the sign);
if (YTGtTop) then CYT = tiletop, else CYT = [DYT]8bits (in tile coordinates);
if (YBLtBot) then CYB = tilebot, else CYB = [DYB]8bits (in tile coordinates); and,
Yoffset = CYT - DYB.
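The calculations above can be sketched as follows; this is an illustrative sketch, with the tile height of 16 taken from the 'd16 comparison and the fixed-point truncation to 8 bits omitted.

```python
def clip_point_to_tile(y0, width, tile_bot, tile_height=16):
    """CYT, CYB, and Yoffset for a point of diameter 'width' centered
    at window-coordinate y0, clipped to a tile whose bottom edge is
    tile_bot (sketch; tile height assumed to be 16)."""
    yt = y0 + width / 2.0
    yb = y0 - width / 2.0
    dyt = yt - tile_bot                          # convert to tile coordinates
    dyb = yb - tile_bot
    cyt = tile_height if dyt >= tile_height else dyt   # clamp to tile top
    cyb = 0 if dyb < 0 else dyb                        # clamp to tile bottom
    yoffset = cyt - dyb                          # distance to unclipped bottom
    return cyt, cyb, yoffset
```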
5.4.4 Trigonometric Functions Unit
As discussed above, setup 215 converts all lines, including line triangles and points, into quadrilaterals. To accomplish this, the trigonometric function unit 3 (see FIG. 8) calculates an x-direction half-width and a y-direction half-width for each line and point. (Quadrilateral generation for filled triangles is discussed in greater detail above in section 5.4.1.) Procedures for generating vertices for line and point quadrilaterals are discussed in greater detail below in section 5.4.5.
Before trigonometric unit 3 can determine a primitive's half-width, it must first calculate the trigonometric functions tan θ, cos θ, and sin θ. In a preferred embodiment of the present invention, setup 215 determines the trigonometric functions cos θ and sin θ using the line's slope that was calculated in the line preprocessing functional unit described in greater detail above. For example:
[The corresponding equations appear as an image (imgf000038_0001) in the original publication and are not reproduced here.]
In yet another embodiment of the present invention, the above discussed trigonometric functions are calculated using a lookup table and iteration method, similar to rsqrt and other complex math functions. ("Rsqrt" stands for the reciprocal square root.) Referring to FIG. 14, there is shown an example of the relationship between the orientation of a line and the sign of the resulting cos θ and sin θ. As is illustrated, the signs of the resulting cos θ and sin θ depend on the orientation of the line.
We will now describe how setup 215 uses the above determined cos θ and sin θ to calculate a primitive's "x" direction half-width ("HWX") and a primitive's "y" direction half-width ("HWY"). For each line, the line's half-widths are the offset distances in the x and y directions from the center of the line to what will be the quadrilateral's edges. For each point, the half-width is equal to one-half of the point's width. These half-widths are magnitudes, meaning that the x-direction half-widths and the y-direction half-widths are always positive. For purposes of illustration, refer to FIG. 15, where there are shown three lines, an antialiased line 1405, a non-aliased x-major line 1410, and a non-aliased y-major line 1415, and their respective associated quadrilaterals, 1420, 1425, and 1430. Each quadrilateral 1420, 1425, and 1430 has a width ("W"), for example, W 1408, W 1413, and W 1418. In a preferred embodiment of the present invention, this width "W" is contained in a primitive packet 6000 (see table 6). (Also, refer to FIG. 16, where there are shown examples of x-major and y-major aliased lines in comparison to an antialiased line.)
To determine an anti-aliased line's half-widths, setup 215 uses the following equations:

HWX = (W/2) |sin θ|
HWY = (W/2) |cos θ|

To determine the half-widths for an x-major, non-anti-aliased line, setup 215 uses the following equations:

HWX = 0
HWY = W/2

To determine the half-widths for a y-major, non-anti-aliased line, setup 215 uses the following equations:

HWX = W/2
HWY = 0

To determine the half-widths for a point, setup 215 uses the following equations:

HWX = W/2
HWY = W/2
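The four half-width cases above can be summarized in a short sketch. This is illustrative only; the `kind` labels and function name are not from the specification.

```python
def half_widths(width, cos_t, sin_t, kind):
    """Return the (HWX, HWY) magnitudes for a line or point, following
    the equations above. `kind` selects among the four cases."""
    if kind == "aa_line":             # anti-aliased line
        return (width / 2.0) * abs(sin_t), (width / 2.0) * abs(cos_t)
    if kind == "x_major":             # non-anti-aliased, x-major line
        return 0.0, width / 2.0
    if kind == "y_major":             # non-anti-aliased, y-major line
        return width / 2.0, 0.0
    return width / 2.0, width / 2.0   # point: half the point's width
```

Note that for an anti-aliased line the offsets are perpendicular to the line, which is why HWX scales with |sin θ| and HWY with |cos θ|.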
5.4.5 Quadrilateral Generation Unit
Quadrilateral generation unit 4 (see FIG. 8): (1) generates a quadrilateral centered around a line or a point; and, (2) sorts a set of vertices for the quadrilateral with respect to the quadrilateral's top vertex, bottom vertex, left vertex, and right vertex. With respect to quadrilaterals, quadrilateral generation unit 4: (a) converts anti-aliased lines into rectangles; (b) converts non-anti-aliased lines into parallelograms; and, (c) converts aliased points into squares centered around the point. (For filled triangles, the vertices are just passed through to the next functional unit, for example, clipping unit 5 (see FIG. 8).) We now discuss an embodiment of a procedure that quadrilateral generation unit 4 takes to generate a quadrilateral for a primitive.

5.4.5.1 Line Segments
With respect to line segments, a quadrilateral's vertices are generated by taking into consideration: (a) a line segment's original vertices (a primitive's original vertices are sent to setup 215 in a primitive packet 6000, see table 6, WindowX0 19, WindowY0 20, WindowZ0 21, WindowX1 14, WindowY1 15, WindowZ1 16, WindowX2 9, WindowY2 10, and, WindowZ2 11); (b) a line segment's orientation (line orientation is determined and discussed in greater detail above in section 5.2.5.2.1); and, (c) a line segment's x-direction half-width and y-direction half-width (half-widths are calculated and discussed in greater detail above in section 5.2.5.4). In particular, the quadrilateral's vertices are generated by adding, or subtracting, a line segment's half-widths with respect to the line segment's original vertices.
If a line segment is pointing to the right (Xcnt > 0) and the line segment is pointing up (Ycnt > 0), then setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:

QY0 = Y0 - HWY    QX0 = X0 + HWX
QY1 = Y0 + HWY    QX1 = X0 - HWX
QY2 = Y1 - HWY    QX2 = X1 + HWX
QY3 = Y1 + HWY    and    QX3 = X1 - HWX

where QV0, QV1, QV2, and QV3 are the quadrilateral's vertices. The quadrilateral vertices are, as of yet, unsorted, but the equations were chosen such that they can easily be sorted based on the values of Ycnt and Xcnt. To illustrate this, please refer to FIG. 17, illustrating aspects of pre-sorted vertex assignments for quadrilaterals according to an embodiment of the present invention. In particular, quadrilateral 1605 delineates a line segment that points right and up, having vertices QV0 1606, QV1 1607, QV2 1608, and QV3 1609.
If a line segment is pointing to the left (Xcnt < 0) and the line segment is pointing up, then setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:

QY0 = Y0 + HWY    QX0 = X0 - HWX
QY1 = Y0 - HWY    QX1 = X0 + HWX
QY2 = Y1 + HWY    QX2 = X1 - HWX
QY3 = Y1 - HWY    and    QX3 = X1 + HWX
To illustrate this, consider that quadrilateral 1610 delineates a line segment that points left and up, having vertices QV0 1611, QV1 1612, QV2 1613, and QV3 1614.
If a line segment is pointing to the left (Xcnt < 0) and the line segment is pointing down (Ycnt < 0), then setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:

QY0 = Y0 + HWY    QX0 = X0 + HWX
QY1 = Y0 - HWY    QX1 = X0 - HWX
QY2 = Y1 + HWY    QX2 = X1 + HWX
QY3 = Y1 - HWY    and    QX3 = X1 - HWX
To illustrate this, consider that quadrilateral 1615 delineates a line segment that points left and down, having vertices QV0 1616, QV1 1617, QV2 1618, and QV3 1619.
If a line segment is pointing right and the line segment is pointing down, then setup 215 performs the following set of equations to determine a set of vertices defining a quadrilateral centered on the line segment:

QY0 = Y0 - HWY    QX0 = X0 - HWX
QY1 = Y0 + HWY    QX1 = X0 + HWX
QY2 = Y1 - HWY    QX2 = X1 - HWX
QY3 = Y1 + HWY    and    QX3 = X1 + HWX
To illustrate this, consider that quadrilateral 1620 delineates a line segment that points right and down, having vertices QV0 1621, QV1 1622, QV2 1623, and QV3 1624. In a preferred embodiment of the present invention, a vertical line segment is treated as if it were pointing to the left and up. A horizontal line segment is treated as if it were pointing right and up.
These vertices, (QX0, QY0) through (QX3, QY3), for each quadrilateral are now reassigned by quadrilateral generation functional unit 4 to top (QXT, QYT, QZT), bottom (QXB, QYB, QZB), left (QXL, QYL, QZL), and right (QXR, QYR, QZR) vertices, so as to identify the topmost, bottommost, leftmost, and rightmost vertices and give the quadrilateral the proper orientation, where the Z-coordinate of each vertex is the original Z-coordinate of the primitive. To accomplish this goal, quadrilateral generation unit 4 uses the following logic. If a line segment is pointing up, then the top and bottom vertices are assigned according to the following equations: (a) vertices (QXT, QYT, QZT) are set to respectively equal (QX3, QY3, Z1); and, (b) vertices (QXB, QYB, QZB) are set to respectively equal (QX0, QY0, Z0). If a line segment is pointing down, then the top and bottom vertices are assigned according to the following equations: (a) vertices (QXT, QYT, QZT) are set to respectively equal (QX0, QY0, Z0); and, (b) vertices (QXB, QYB, QZB) are set to respectively equal (QX3, QY3, Z1).
If a line segment is pointing right, then the left and right vertices are assigned according to the following equations: (a) vertices (QXL, QYL, QZL) are set to respectively equal (QX1, QY1, Z0); and, (b) vertices (QXR, QYR, QZR) are set to respectively equal (QX2, QY2, Z1). Finally, if a line segment is pointing left, the left and right vertices are assigned according to the following equations: (a) vertices (QXL, QYL, QZL) are set to respectively equal (QX2, QY2, Z1); and, (b) vertices (QXR, QYR, QZR) are set to respectively equal (QX1, QY1, Z0).
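The four direction cases and the subsequent top/bottom/left/right reassignment reduce to sign flips on the half-widths, as the following sketch shows. This is a software illustration of the conventions above, not the hardware datapath; Z-coordinates are omitted for brevity.

```python
def line_quad(x0, y0, x1, y1, hwx, hwy):
    """Generate pre-sorted quad vertices QV0..QV3 for a line segment and
    reassign them to top/bottom/left/right per the rules in the text."""
    dx, dy = x1 - x0, y1 - y0
    if dx == 0:                        # vertical: treat as pointing left and up
        right, up = False, True
    elif dy == 0:                      # horizontal: treat as pointing right and up
        right, up = True, True
    else:
        right, up = dx > 0, dy > 0
    sy = -hwy if right else hwy        # sign pattern of the four cases above
    sx = hwx if right == up else -hwx
    qv0 = (x0 + sx, y0 + sy)
    qv1 = (x0 - sx, y0 - sy)
    qv2 = (x1 + sx, y1 + sy)
    qv3 = (x1 - sx, y1 - sy)
    # Pointing up: top is QV3, bottom is QV0 (and vice versa when down);
    # pointing right: left is QV1, right is QV2 (swapped when left).
    top, bot = (qv3, qv0) if up else (qv0, qv3)
    lft, rht = (qv1, qv2) if right else (qv2, qv1)
    return {"top": top, "bot": bot, "left": lft, "right": rht}
```

For a right-and-up line this reproduces the first set of equations exactly; the other three cases follow by the sign flips.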
5.4.5.2 Aliased Points

An aliased point is treated as a special case, meaning that it is treated as if it were a vertical line segment.

5.4.6 Clipping Unit
For purposes of the present invention, clipping a polygon to a tile can be defined as finding the area of intersection between a polygon and a tile. The clip points are the vertices of this area of intersection. To find a tight bounding box that encloses the parts of a primitive that intersect a particular tile, and to facilitate a subsequent determination of the primitive's minimum depth value (Zmin), clipping unit 5 (see FIG. 8), for each edge of a tile: (a) selects a tile edge from the tile (each tile has four edges), to determine which, if any, of a quadrilateral's four edges, or a triangle's three edges, cross the tile edge; (b) checks the clip codes (discussed in greater detail below) with respect to the selected edge; (c) computes the two intersection points (if any) of a quad edge or a triangle edge with the selected tile edge; and, (d) compares the computed intersection points to the tile boundaries to determine validity and updates the clip points if appropriate.
The "current tile" is the tile currently being set up for cull 410 by setup 215. As discussed in greater detail above, a previous stage of pipeline 200, for example, sort 320, sorts each primitive in a frame with respect to those regions, or tiles, of a window (the window is divided into multiple tiles) that are touched by the primitive. These primitives were sent in a tile-by-tile order to setup 215. It can be appreciated that, with respect to clipping unit 5, setup 215 can select an edge in an arbitrary manner, as long as each edge is eventually selected. For example, one embodiment of clipping unit 5 can first select a tile's top edge, next the tile's right edge, next the tile's bottom edge, and finally the tile's left edge. In yet another embodiment of clipping unit 5, the tile edges may be selected in a different order.
Sort 320 (see FIG. 3) provides setup 215 the x-coordinate (TileXLocation) of the current tile's left tile edge, and the y-coordinate (TileYLocation) of the bottom tile edge, via a begin tile packet (see table 2). For purposes of this description, the tile's x-coordinate is referred to as "tile x," and the tile's y-coordinate is referred to as "tile y." To identify a coordinate location for each edge of the current tile, clipping unit 5 sets the left edge of the tile equal to tile x, which means that the left tile edge x-coordinate is equal to tile x + 0. The current tile's right edge is set to equal the tile's left edge plus the width of the tile. The current tile's bottom edge is set to equal tile y, which means that this y-coordinate is equal to tile y + 0. Finally, the tile's top edge is set to equal the bottom tile edge plus the height of the tile in pixels. In a preferred embodiment of the present invention, the width and height of a tile are 16 pixels. However, in yet other embodiments of the present invention, the dimensions of the tile can be any convenient size.
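The tile-edge derivation just described can be sketched as follows. The 16-pixel default is from the text; the function itself and its names are illustrative.

```python
TILE_SIZE = 16   # width and height of a tile in pixels (preferred embodiment)

def tile_edges(tile_x, tile_y, width=TILE_SIZE, height=TILE_SIZE):
    """Derive the four tile-edge coordinates from the begin tile
    packet's (tile x, tile y) corner."""
    left = tile_x                # left edge = tile x + 0
    right = left + width         # right edge = left edge + tile width
    bottom = tile_y              # bottom edge = tile y + 0
    top = bottom + height        # top edge = bottom edge + tile height
    return left, right, bottom, top
```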
5.4.6.1 Clip Codes
Clip codes are used to determine which edges of a polygon, if any, touch the current tile. (A previous stage of pipeline 200 has sorted each primitive with respect to those tiles of a 2-D window that each respective primitive touches.) In one embodiment of the present invention, clip codes are Boolean values, wherein "0" represents false and "1" represents true. A clip code value of false indicates that a primitive does not need to be clipped with respect to the edge of the current tile that that particular clip code represents. Whereas, a value of true indicates that a primitive does need to be clipped with respect to the edge of the current tile that that particular clip code represents. To illustrate how one embodiment of the present invention determines clip codes for a primitive with respect to the current tile, consider the following pseudocode, wherein there is shown a procedure for determining clip codes. As noted above, the pseudocode used is, essentially, a computer language using universal computer language conventions. While the pseudocode employed here has been invented solely for the purposes of this description, it is designed to be easily understandable by any computer programmer skilled in the art.
In one embodiment of the present invention, clip codes are obtained as follows for each of a primitive's vertices: C[i] = ((v[i].y > tile_ymax) << 3) | ((v[i].x < tile_xmin) << 2) | ((v[i].y < tile_ymin) << 1) | (v[i].x > tile_xmax), where, for each vertex of a primitive: (a) C[i] represents a respective clip code; (b) v[i].y represents the vertex's y-coordinate; (c) tile_ymax represents the maximum y-coordinate of the current tile; (d) v[i].x represents the vertex's x-coordinate; (e) tile_xmin represents the minimum x-coordinate of the current tile; (f) tile_ymin represents the minimum y-coordinate of the current tile; and, (g) tile_xmax represents the maximum x-coordinate of the current tile. In this manner, the Boolean values corresponding to the clip codes are produced. In yet another embodiment of the present invention, clip codes are obtained using the following set of equations. In the case of quads, the following mapping is used, where "Q" represents a quadrilateral's respective coordinates, and TileRht, TileLft, TileTop, and TileBot respectively represent the x-coordinate of a right tile edge, the x-coordinate of a left tile edge, the y-coordinate of a top tile edge, and the y-coordinate of a bottom tile edge.
(X0, Y0) = (QXBot, QYBot); (X1, Y1) = (QXLft, QYLft); (X2, Y2) = (QXRht, QYRht); (X3, Y3) = (QXTop, QYTop);
// left
ClpFlagL[3:0] = {(X3 <= TileLft), (X2 <= TileLft), (X1 <= TileLft), (X0 <= TileLft)}
// right
ClpFlagR[3:0] = {(X3 >= TileRht), (X2 >= TileRht), (X1 >= TileRht), (X0 >= TileRht)}
// down
ClpFlagD[3:0] = {(Y3 <= TileBot), (Y2 <= TileBot), (Y1 <= TileBot), (Y0 <= TileBot)}
// up
ClpFlagU[3:0] = {(Y3 >= TileTop), (Y2 >= TileTop), (Y1 >= TileTop), (Y0 >= TileTop)}
(ClpFlag[3] for triangles is a don't care.) ClpFlagL[1] asserted means that vertex 1 is clipped by the left edge of the tile (the vertices have already been sorted by quad generation unit 4, see FIG. 8). ClpFlagR[2] asserted means that vertex 2 is clipped by the right edge of the tile, and the like. Here, "clipped" means that the vertex lies outside of the tile.
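The clip-flag bit vectors above can be modeled in software as follows. This is an illustrative model, not the hardware; bit i of each returned flag corresponds to vertex i.

```python
def clip_flags(vertices, tile_lft, tile_rht, tile_bot, tile_top):
    """Compute ClpFlagL/R/D/U-style bit vectors for a polygon's
    vertices: bit i is set when vertex i lies on or beyond that edge."""
    flag_l = flag_r = flag_d = flag_u = 0
    for i, (x, y) in enumerate(vertices):
        flag_l |= (x <= tile_lft) << i   # clipped by left edge
        flag_r |= (x >= tile_rht) << i   # clipped by right edge
        flag_d |= (y <= tile_bot) << i   # clipped by bottom edge
        flag_u |= (y >= tile_top) << i   # clipped by top edge
    return flag_l, flag_r, flag_d, flag_u
```

A vertex can set bits in two flags at once (for example, a vertex beyond the top-left corner sets both the left and the up flag), which is exactly what lets the later passes detect corner inclusion.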
5.4.6.2 Clipping Points
After using the clip codes to determine that a primitive intersects the boundaries of the current tile, clipping unit 5 clips the primitive to the tile by determining the values of nine possible clipping points. A clipping point is a vertex of a new polygon formed by clipping (finding the area of intersection of) the initial polygon by the boundaries of the current tile. There are nine possible clipping points because there are eight distinct locations where a polygon might intersect a tile's edge. For triangles only, there is an internal clipping point which equals the y-sorted VtxMid. Of these nine possible clipping points, at most eight of them can be valid at any one time. For purposes of simplifying the discussion of clipping points in this specification, the following acronyms are adopted to represent each respective clipping point: (a) clipping on the top tile edge yields left (PTL) and right (PTR) clip vertices; (b) clipping on the bottom tile edge is performed identically to that on the top tile edge; bottom edge clipping yields the bottom left (PBL) and bottom right (PBR) clip vertices; (c) clipping vertices sorted with respect to the x-coordinate yields left high/top (PLT) and left low/bottom (PLB) vertices; (d) clipping vertices sorted with respect to the y-coordinate yields right high/top (PRT) and right low/bottom (PRB) vertices; and, (e) vertices that lie inside the tile are assigned to an internal clipping point (PI).
Referring to FIG. 18, there are illustrated clipping points for two polygons, a rectangle 10 and a triangle 10, intersecting respective tiles 15 and 25.
5.4.6.3 Validation of Clipping Points
Clipping unit 5 (see FIG. 8) now validates each of the computed clipping points, making sure that the coordinates of each clipping point are within the coordinate space of the cuπent tile. For example, points that intersect the top tile edge may be such that they are both to the left of the tile. In this case, the intersection points are marked invalid.
In a preferred embodiment of the present invention, each clip point has an x-coordinate, a y-coordinate, and a one-bit valid flag. Setting the flag to "0" indicates that the x-coordinate and the y-coordinate are not valid. If the intersection with the edge is such that one or both of a tile's edge corners (such corners were discussed in greater detail above) are included in the intersection, then the newly generated intersection points are valid.
A primitive is discarded if none of its clipping points are found to be valid. The pseudocode for an algorithm for determining clipping points, according to one embodiment of the present invention, is illustrated below:
Notation note: P = (X, Y), e.g., PT = (XT, YT); Line(P1, P0) means the line formed by endpoints P1 and P0.
// Sort the Clip Flags in X
XsortClpFlagL[3:0] = LftC & RhtC ? ClpFlagL[3:0] :
    ClpFlagL[XsortMidSrc, XsortRhtSrc, XsortLftSrc, XsortMidSrc]
where indices 3:0 of the clip flags refer to vertices: 0 represents bottom; 1 represents left; 2 represents right; and 3 represents top. For example, ClpFlagL[2] refers to whether time-order vertex 2 is clipped by the left edge, whereas XsortClpFlagL[2] refers to the rightmost vertex.
XsortClpFlagR[3:0] = LftC & RhtC ? ClpFlagR[3:0] :
    ClpFlagR[XsortMidSrc, XsortRhtSrc, XsortLftSrc, XsortMidSrc]
XsortClpFlagD[3:0] = LftC & RhtC ? ClpFlagD[3:0] :
    ClpFlagD[XsortMidSrc, XsortRhtSrc, XsortLftSrc, XsortMidSrc]
XsortClpFlagU[3:0] = LftC & RhtC ? ClpFlagU[3:0] :
    ClpFlagU[XsortMidSrc, XsortRhtSrc, XsortLftSrc, XsortMidSrc]
// Sort the Clip Flags in Y
YsortClpFlagL[3:0] = LftC & RhtC ? ClpFlagL[3:0] :
    ClpFlagL[YsortTopSrc, YsortMidSrc, YsortMidSrc, YsortBotSrc]
YsortClpFlagR[3:0] = LftC & RhtC ? ClpFlagR[3:0] :
    ClpFlagR[YsortTopSrc, YsortMidSrc, YsortMidSrc, YsortBotSrc]
YsortClpFlagD[3:0] = LftC & RhtC ? ClpFlagD[3:0] :
    ClpFlagD[YsortTopSrc, YsortMidSrc, YsortMidSrc, YsortBotSrc]
YsortClpFlagU[3:0] = LftC & RhtC ? ClpFlagU[3:0] :
    ClpFlagU[YsortTopSrc, YsortMidSrc, YsortMidSrc, YsortBotSrc]
// Pass #1 Clip to Left Tile edge using X-sorted primitive
// For LeftBottom: check clipping flags, dereference vertices and slopes
If (XsortClpFlagL[0]) // bot vertex clipped by TileLeft
Then
Pref = (quad) ? P2 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SL :
        BotC ? XsortSB :
        TopC ? XsortSB
Else
Pref = (quad) ? P0 :
       BotC ? XsortMidSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SR :
        BotC ? XsortSL :
        TopC ? XsortSB
EndIf
YLB = Yref + slope * (TileLeft - Xref)
// For LeftBottom: calculate intersection point, clamp, and check validity
IntYLB = (XsortClpFlgL[1]) ? Yref + slope * (TileLeft - Xref) :
         XsortLftSrc→mux(Y0, Y1, Y2)
ClipYLB = (IntYLB < TileBot) ? TileBot : IntYLB
ValidYLB = (IntYLB <= TileTop)
// For LeftTop: check clipping flags, dereference vertices and slopes
If (XsortClpFlagL[3]) // Top vertex clipped by TileLeft
Then
Pref = (quad) ? P2 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SR :
        BotC ? XsortST :
        TopC ? XsortST
Else
Pref = (quad) ? P3 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortMidSrc→mux(P0, P1, P2)
Slope = (quad) ? SL :
        BotC ? XsortST :
        TopC ? XsortSL
EndIf
YLT = Yref + slope * (TileLeft - Xref)
// For LeftTop: calculate intersection point, clamp, and check validity
IntYLT = (XsortClpFlgL[1]) ? Yref + slope * (TileLeft - Xref) :
         XsortLftSrc→mux(Y0, Y1, Y2)
ClipYLT = (IntYLT > TileTop) ? TileTop : IntYLT
ValidYLT = (IntYLT >= TileBot)
// The X Left coordinate is shared by the YLB and YLT
ClipXL = (XsortClpFlgL[1]) ? TileLeft :
         XsortLftSrc→mux(X0, X1, X2)
ValidClipLft = ValidYLB & ValidYLT
// Pass #2 Clip to Right Tile edge using X-sorted primitive
// For RightBot: check clipping flags, dereference vertices and slopes
If (XsortClpFlagR[0]) // Bot vertex clipped by TileRight
Then
Pref = (quad) ? P0 :
       BotC ? XsortMidSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SR :
        BotC ? XsortSL :
        TopC ? XsortSB
Else
Pref = (quad) ? P2 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SL :
        BotC ? XsortSB :
        TopC ? XsortSB
EndIf
// For RightBot: calculate intersection point, clamp, and check validity
IntYRB = (XsortClpFlgR[2]) ? Yref + slope * (TileRight - Xref) :
         XsortRhtSrc→mux(Y0, Y1, Y2)
ClipYRB = (IntYRB < TileBot) ? TileBot : IntYRB
ValidYRB = (IntYRB <= TileTop)
// For RightTop: check clipping flags, dereference vertices and slopes
If (XsortClpFlagR[3]) // Top vertex clipped by TileRight
Then
Pref = (quad) ? P3 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortMidSrc→mux(P0, P1, P2)
Slope = (quad) ? SL :
        BotC ? XsortST :
        TopC ? XsortSL
Else
Pref = (quad) ? P2 :
       BotC ? XsortRhtSrc→mux(P0, P1, P2) :
       TopC ? XsortRhtSrc→mux(P0, P1, P2)
Slope = (quad) ? SR :
        BotC ? XsortST :
        TopC ? XsortST
EndIf
YRT = Yref + slope * (TileRight - Xref)
// For RightTop: calculate intersection point, clamp, and check validity
IntYRT = (XsortClpFlgR[2]) ? Yref + slope * (TileRight - Xref) :
         XsortRhtSrc→mux(Y0, Y1, Y2)
ClipYRT = (IntYRT > TileTop) ? TileTop : IntYRT
ValidYRT = (IntYRT >= TileBot)
// The X right coordinate is shared by the YRB and YRT
ClipXR = (XsortClpFlgR[2]) ? TileRight :
         XsortRhtSrc→mux(X0, X1, X2)
ValidClipRht = ValidYRB & ValidYRT
// Pass #3 Clip to Bottom Tile edge using Y-sorted primitive
// For BottomLeft: check clipping flags, dereference vertices and slopes
If (YsortClpFlagD[1]) // Left vertex clipped by TileBot
Then
Pref = (quad) ? P3 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNL :
        LftC ? YsortSNL :
        RhtC ? YsortSNL
Else
Pref = (quad) ? P1 :
       LftC ? YsortMidSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNR :
        LftC ? YsortSNB :
        RhtC ? YsortSNL
EndIf
// For BottomLeft: calculate intersection point, clamp, and check validity
IntXBL = (YsortClpFlgD[0]) ? Xref + slope * (TileBot - Yref) :
         YsortBotSrc→mux(X0, X1, X2)
ClipXBL = (IntXBL < TileLeft) ? TileLeft : IntXBL
ValidXBL = (IntXBL <= TileRight)
// For BotRight: check clipping flags, dereference vertices and slopes
If (YsortClpFlagD[2]) // Right vertex clipped by TileBot
Then
Pref = (quad) ? P3 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNR :
        LftC ? YsortSNR :
        RhtC ? YsortSNR
Else
Pref = (quad) ? P2 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortMidSrc→mux(P0, P1, P2)
Slope = (quad) ? SNL :
        LftC ? YsortSNR :
        RhtC ? YsortSNB
EndIf
// For BotRight: calculate intersection point, clamp, and check validity
IntXBR = (YsortClpFlgD[0]) ? Xref + slope * (TileBot - Yref) :
         YsortBotSrc→mux(X0, X1, X2)
ClipXBR = (IntXBR > TileRight) ? TileRight : IntXBR
ValidXBR = (IntXBR >= TileLeft)
// The Y bot coordinate is shared by the XBL and XBR
ClipYB = (YsortClpFlgD[0]) ? TileBot :
         YsortBotSrc→mux(Y0, Y1, Y2)
ValidClipBot = ValidXBL & ValidXBR
// Pass #4 Clip to Top Tile edge using Y-sorted primitive
// For TopLeft: check clipping flags, dereference vertices and slopes
If (YsortClpFlagU[1]) // Left vertex clipped by TileTop
Then
Pref = (quad) ? P1 :
       LftC ? YsortMidSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNR :
        LftC ? YsortSNB :
        RhtC ? YsortSNL
Else
Pref = (quad) ? P3 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNL :
        LftC ? YsortSNL :
        RhtC ? YsortSNL
EndIf
// For TopLeft: calculate intersection point, clamp, and check validity
IntXTL = (YsortClpFlgU[3]) ? Xref + slope * (TileTop - Yref) :
         YsortTopSrc→mux(X0, X1, X2)
ClipXTL = (IntXTL < TileLeft) ? TileLeft : IntXTL
ValidXTL = (IntXTL <= TileRight)
// For TopRight: check clipping flags, dereference vertices and slopes
If (YsortClpFlagU[2]) // Right vertex clipped by TileTop
Then
Pref = (quad) ? P2 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortMidSrc→mux(P0, P1, P2)
Slope = (quad) ? SNL :
        LftC ? YsortSNR :
        RhtC ? YsortSNB
Else
Pref = (quad) ? P3 :
       LftC ? YsortTopSrc→mux(P0, P1, P2) :
       RhtC ? YsortTopSrc→mux(P0, P1, P2)
Slope = (quad) ? SNR :
        LftC ? YsortSNR :
        RhtC ? YsortSNR
EndIf
// For TopRight: calculate intersection point, clamp, and check validity
IntXTR = (YsortClpFlgU[3]) ? Xref + slope * (TileTop - Yref) :
         YsortTopSrc→mux(X0, X1, X2)
ClipXTR = (IntXTR > TileRight) ? TileRight : IntXTR
ValidXTR = (IntXTR >= TileLeft)
// The Y top coordinate is shared by the XTL and XTR
ClipYT = (YsortClpFlgU[3]) ? TileTop :
         YsortTopSrc→mux(Y0, Y1, Y2)
ValidClipTop = ValidXTL & ValidXTR
The eight clipping points identified so far can identify points clipped by the edge of the tile and also extreme vertices (i.e., topmost, bottommost, leftmost, or rightmost) that are inside of the tile. One more clipping point is needed to identify a vertex that is inside the tile but is not at an extremity of the polygon (i.e., the vertex called VM).
// Identify Internal Vertex
(ClipXI, ClipYI) = YsortMidSrc→mux(P0, P1, P2)
ClipM = XsortMidSrc→mux(Clip0, Clip1, Clip2)
ValidClipI = !(ClpFlgL[YsortMidSrc]) & !(ClpFlgR[YsortMidSrc])
           & !(ClpFlgD[YsortMidSrc]) & !(ClpFlgU[YsortMidSrc])
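For reference, the geometry that the four fixed-function passes above compute can be reproduced in software with a generic polygon-rectangle clip (Sutherland-Hodgman). The sketch below is only a functional analogue under that substitution: it yields the clipped polygon's vertices but not the named clip points or validity flags of the hardware formulation.

```python
def clip_polygon_to_tile(poly, xmin, ymin, xmax, ymax):
    """Clip a polygon (list of (x, y) tuples) to an axis-aligned
    rectangle, one boundary at a time (Sutherland-Hodgman)."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prv = pts[i - 1]                 # previous vertex (wraps around)
            if inside(cur):
                if not inside(prv):
                    out.append(intersect(prv, cur))
                out.append(cur)
            elif inside(prv):
                out.append(intersect(prv, cur))
        return out

    def ix(p, q, axis, val):
        # Intersection of segment p-q with an axis-aligned boundary.
        (x0, y0), (x1, y1) = p, q
        if axis == 0:
            t = (val - x0) / (x1 - x0)
            return (val, y0 + t * (y1 - y0))
        t = (val - y0) / (y1 - y0)
        return (x0 + t * (x1 - x0), val)

    for axis, val, keep in ((0, xmin, lambda p: p[0] >= xmin),
                            (0, xmax, lambda p: p[0] <= xmax),
                            (1, ymin, lambda p: p[1] >= ymin),
                            (1, ymax, lambda p: p[1] <= ymax)):
        poly = clip_edge(poly, keep, lambda p, q, a=axis, v=val: ix(p, q, a, v))
        if not poly:
            break                            # polygon entirely outside the tile
    return poly
```

As in the text, a primitive whose clipped polygon is empty can be discarded.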
Geometric Data Required By Cull 410
Furthermore, some of the geometric data required by cull 410 is determined here.
Geometric data required by cull 410 includes CullXTL and CullXTR. These are the X intercepts of the polygon with the line of the top edge of the tile. They differ from PTL and PTR in that PTL and PTR must be within or at the tile boundaries, while CullXTL and CullXTR may be right or left of the tile boundaries. If YT lies below the top edge of the tile, then CullXTL = CullXTR = XT. CullYTLR is the Y coordinate shared by CullXTL and CullXTR.
(CullXL, CullYL): equal to PL, unless YL lies above the top edge, in which case it equals (CullXTL, CullYTLR).
(CullXR, CullYR): equal to PR, unless YR lies above the top edge, in which case it equals (CullXTR, CullYTLR).
// CullXTL and CullXTR (clamped to window range)
CullXTL = (IntXTL < MIN) ? MIN : IntXTL
CullXTR = (IntXTR > MAX) ? MAX : IntXTR
// (CullXL, CullYL) and (CullXR, CullYR)
VtxRht = (quad) ? P2 : YsortMidSrc→mux(P0, P1, P2)
VtxLft = (quad) ? P1 : YsortMidSrc→mux(P0, P1, P2)
(CullXL, CullYL)temp = (YsortClipL clipped by TileTop) ? (IntXTL, IntYT) : VtxLft
(CullXL, CullYL) = (CullXLtemp < MIN) ? (ClipXL, ClipYLB) : CullXLtemp
(CullXR, CullYR)temp = (YsortClipR clipped by TileTop) ? (IntXTR, IntYT) : VtxRht
(CullXR, CullYR) = (CullXRtemp > MAX) ? (ClipXR, ClipYRB) : CullXRtemp
// Determine Cull Slopes
CullSR, CullSL, CullSB = cvt(YsortSNR, YsortSNL, YsortSNB)
5.4.6.4 Quadrilateral Vertices Outside of Window

With wide lines on tiles at the edge of the window, it is possible that one or more of the calculated vertices (see section 5.4.5.1) may lie outside of the window range. Setup can handle this by carrying 2 bits of extra coordinate range, one to allow for negative values, and one to increase the magnitude range. In a preferred embodiment of the present invention, the range and precision of the data sent to cull 410 (14.2 for x coordinates) is just enough to define the points inside the window range. The data cull 410 gets from setup 215 includes the left and right corner points. In cases where a quad vertex falls outside of the window range, setup 215 will pass the following values to cull 410: (1) if tRight.x is right of the window range, then clamp to the right window edge; (2) if tLeft.x is left of the window range, then clamp to the left window edge; (3) if v[VtxRightC].x is right of the window range, then send vertex rLow (that is, the lower clip point on the right tile edge) as the right corner; and, (4) if v[VtxLeftC].x is left of the window range, then send lLow (that is, the lower clip point on the left tile edge) as the left corner. This is illustrated in FIG. 19, where there is shown an example of processing quadrilateral vertices outside of a window. (FIG. 18 corresponds to Figure 51 in United States Provisional Patent Application Serial Number 60/097,336.) FIG. 22 illustrates aspects of clip code vertex assignment.
Note that triangles are clipped to the valid window range by a previous stage of pipeline 200, for example, geometry 310. Setup 215, in the current context, is only concerned with quads generated for wide lines. Cull 410 (see FIG. 4) needs to detect overflow and underflow when it calculates the span end points during rasterization, because out-of-range x values may be produced during edge walking. If an overflow or underflow occurs, then the x-range should be clamped to within the tile range.
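The overflow/underflow handling just described amounts to clamping each span endpoint into the tile's x range, e.g. (illustrative only; the function name is not from the specification):

```python
def clamp_span_x(x, tile_left, tile_right):
    """Clamp an edge-walked span endpoint to the tile's x range when
    overflow or underflow is detected."""
    return max(tile_left, min(x, tile_right))
```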
We have now determined a primitive's intersection points (clipping points) with respect to the current tile, and we have determined the clip codes, or valid flags. We can now proceed to the computation of a bounding box, a minimum depth value (Zmin), and a reference stamp, each of which will be described in greater detail below.

5.4.7 Bounding Box
The bounding box is the smallest box that can be drawn around the clipped polygon.
The bounding box of the primitive intersection is determined by examining the clipped vertices (clipped vertices, or clipping points are described in greater detail above). We use these points to compute dimensions for a bounding box.
The dimensions of the bounding box are identified by BXL (the leftmost of the valid clip points), BXR (the rightmost of the valid clip points), BYT (the topmost of the valid clip points), and BYB (the bottommost of the valid clip points), in stamps. Here, "stamp" refers to the resolution to which we want to determine the bounding box.
Finally, setup 215 identifies the smallest Y (the bottommost y-coordinate of the clipped polygon). This smallest Y is required by cull 410 for its edge walking algorithm.
To illustrate a procedure according to one embodiment of the present invention, we now describe pseudocode for determining the dimensions of a bounding box. The valid flags for the clip points, ValidClipL (which requires that clip points PLT and PLB be valid), ValidClipR, ValidClipT, and ValidClipB, correspond to the clip codes described in greater detail above in reference to clipping unit 5 (see FIG. 8). "PLT" refers to "point left, top." PLT and (ClipXL, ClipYLT) are the same.
BXLtemp = min_valid(ClipXTL, ClipXBL); BXL = ValidClipL ? ClipXL : BXLtemp;
BXRtemp = max_valid(ClipXTR, ClipXBR); BXR = ValidClipR ? ClipXR : BXRtemp;
BYTtemp = max_valid(ClipYLT, ClipYRT); BYT = ValidClipT ? ClipYT : BYTtemp;
BYBtemp = min_valid(ClipYLB, ClipYRB); BYB = ValidClipB ? ClipYB : BYBtemp;
CullYB = trunc(BYB)subpixels (CullYB is the smallest Y value); // expressed in subpixels (8x8 subpixels = 1 pixel; 2x2 pixels = 1 stamp)
We now have the coordinates that describe a bounding box that circumscribes those parts of a primitive that intersect the current tile. These xmin (BXL), xmax (BXR), ymin (BYB), and ymax (BYT) are in screen-relative pixel coordinates and need to be converted to tile-relative stamp coordinates.
Screen-relative coordinates can describe a 2048 by 2048 pixel screen. As discussed above, in a preferred embodiment of the present invention, tiles are only 16 by 16 pixels in size. By expressing coordinates as tile relative, we can avoid having to store many bits. Converting from screen coordinates to tile-relative coordinates is simply a matter of ignoring (or truncating) the most significant bits. To illustrate this, consider that it takes 11 bits to describe 2048 pixels, whereas it takes only 4 bits to describe 16 pixels; discarding the top 7 bits will yield a tile-relative value. We now illustrate a set of equations for converting x-coordinates and y-coordinates from screen-based values to tile-relative values.
This can be accomplished by first converting the coordinates to tile relative values and then considering the high three bits only (i.e. shift right by 1 bit). This works; except when xmax (and/or ymax) is at the edge of the tile. In that case, we decrement the xmax (and/or ymax) by 1 unit before shifting.
// The Bounding box is expressed in stamps
BYT = trunc(BYT - 1 subpixel)stamp; BYB = trunc(BYB)stamp; BXL = trunc(BXL)stamp; and, BXR = trunc(BXR - 1 subpixel)stamp.
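The conversion above can be sketched in software as follows. This is an illustrative sketch, not the hardware datapath; the function name, the integer-pixel inputs, and the modulo/shift idiom are our own assumptions, standing in for the bit truncation the text describes.

```python
TILE_SIZE = 16   # pixels per tile edge (preferred embodiment)
STAMP_SIZE = 2   # pixels per stamp edge (2x2 pixel stamps)

def screen_to_tile_stamps(xmin, xmax, ymin, ymax):
    # Keeping only the low 4 bits of a screen pixel coordinate yields
    # the tile-relative pixel coordinate (discard the top bits).
    txmin = xmin % TILE_SIZE
    tymin = ymin % TILE_SIZE
    # For the max edges, step back one unit first so a coordinate that
    # lands exactly on the tile edge maps to the last stamp inside it.
    txmax = (xmax - 1) % TILE_SIZE
    tymax = (ymax - 1) % TILE_SIZE
    # Shift right by 1 bit: tile-relative pixels -> stamp indices.
    return (txmin // STAMP_SIZE, txmax // STAMP_SIZE,
            tymin // STAMP_SIZE, tymax // STAMP_SIZE)
```

For a box covering a whole tile that starts at screen x = 32, this yields stamp indices 0 through 7 in each dimension, illustrating the edge-decrement rule.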
5.4.8 Depth Gradients and Depth Offset Unit
The object of this functional unit is to: (a) calculate the depth gradients Zx = dz/dx and Zy = dz/dy; (b) calculate the depth offset O, which will be applied in the Zmin & Zref subunit; (c) determine if the triangle is x major or y major; (d) calculate ZslopeMjr (the z gradient along the major edge); and, (e) determine ZslopeMnr (the z gradient along the minor axis).
In the case of triangles, the input vertices are the time-ordered triangle vertices (X0, Y0, Z0), (X1, Y1, Z1), (X2, Y2, Z2). For lines, the input vertices are 3 of the quad vertices produced by Quad Gen: (QXB, QYB, ZB), (QXL, QYL, ZL), (QXR, QYR, ZR). In the case of stippled lines, the Z partials are calculated once (for the original line), saved, and reused for each stippled line segment.
In the case of line mode triangles, an initial pass through this subunit is taken to calculate the depth offset, which will be saved and applied to each of the triangle's edges in subsequent passes. The Depth Offset is calculated only for filled and line mode triangles and only if the depth offset calculation is enabled.
5.4.8.1 Depth Gradients
The vertices are first sorted before being inserted into the equation to calculate depth gradients. For triangles, the sorting information was obtained in the triangle preprocessing unit described in greater detail above. (The information is contained in the pointers YsortTopSrc, YsortMidSrc, and YsortBotSrc.) For quads, the vertices are already sorted by Quadrilateral Generation unit 4 described in greater detail above. Note: sorting the vertices is desirable so that changing the input vertex ordering will not change the results. We now describe pseudocode for sorting the vertices:
If triangles:
X'0 = YsortBotSrc→mux(x2,x1,x0); Y'0 = YsortBotSrc→mux(y2,y1,y0);
X'1 = YsortMidSrc→mux(x2,x1,x0); Y'1 = YsortMidSrc→mux(y2,y1,y0);
X'2 = YsortTopSrc→mux(x2,x1,x0); Y'2 = YsortTopSrc→mux(y2,y1,y0)
To illustrate the above notation, consider the following example where X' = ptr->mux(x2, x1, x0) means: if ptr = 001, then X' = x0; if ptr = 010, then X' = x1; and, if ptr = 100, then X' = x2.
If Quads:
X'0 = QXB Y'0 = QYB
X'1 = QXL Y'1 = QYL
X'2 = QXR Y'2 = QYR
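The ptr->mux(...) notation above behaves like a one-hot multiplexer. A minimal sketch (the Python function name is ours):

```python
def mux(ptr, x2, x1, x0):
    # ptr is a 3-bit one-hot pointer, as in X' = ptr->mux(x2, x1, x0):
    # 0b001 selects x0, 0b010 selects x1, 0b100 selects x2.
    if ptr == 0b001:
        return x0
    if ptr == 0b010:
        return x1
    if ptr == 0b100:
        return x2
    raise ValueError("ptr must be one-hot")
```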
The partial derivatives represent the depth gradient for the polygon. They are given by the following equation:
∂z/∂x = [(y'2 - y'0)(z'1 - z'0) - (y'1 - y'0)(z'2 - z'0)] / [(x'1 - x'0)(y'2 - y'0) - (x'2 - x'0)(y'1 - y'0)]
∂z/∂y = [(x'1 - x'0)(z'2 - z'0) - (x'2 - x'0)(z'1 - z'0)] / [(x'1 - x'0)(y'2 - y'0) - (x'2 - x'0)(y'1 - y'0)]
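These partial derivatives can be evaluated directly from three non-collinear sorted vertices. A minimal sketch, with vertex tuples and the function name as our own notation:

```python
def depth_gradients(v0, v1, v2):
    # Solve the plane equations for dz/dx and dz/dy from the three
    # sorted vertices v = (x, y, z), per the formulas above.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    denom = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    dzdx = ((y2 - y0) * (z1 - z0) - (y1 - y0) * (z2 - z0)) / denom
    dzdy = ((x1 - x0) * (z2 - z0) - (x2 - x0) * (z1 - z0)) / denom
    return dzdx, dzdy
```

For three vertices lying on the plane z = 2x + 3y + 1, this returns (2.0, 3.0), as expected.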
5.4.8.2 Depth Offset 7 (see FIG. 8)
The depth offset for triangles (both line mode and filled) is defined by OpenGL® as: O = M * factor + Res * units, where:
M= max( |ZX|, |ZY|) of the triangle; Factor is a parameter supplied by the user;
Res is a constant; and, Units is a parameter supplied by the user.
The "Res*units" term has already been added to all the Z values by a previous stage of pipeline 200, for example, Geometry 310. So Setup's 215 depth offset component becomes: O = M * factor * 8. Clamp O to lie in the range (-2^24, +2^24).
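Setup's share of the offset computation can be sketched as follows (illustrative only; the function name is ours, while the x8 scale and the 2^24 clamp follow the text):

```python
ZCLAMP = 2 ** 24  # clamp bound used by Setup

def setup_depth_offset(dzdx, dzdy, factor):
    # O = M * factor * 8, where M = max(|ZX|, |ZY|); the Res*units term
    # is assumed already folded into the Z values by an earlier stage.
    m = max(abs(dzdx), abs(dzdy))
    o = m * factor * 8
    return max(-ZCLAMP, min(ZCLAMP, o))  # clamp to (-2^24, +2^24)
```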
The multiply by 8 is required to maintain the units. The depth offset will be added to the Z values when they are computed for Zmin and Zref later. In the case of line mode triangles, the depth offset is calculated once, saved, and applied to each of the subsequent triangle edges.
5.4.8.2.1 Determine X major for triangles
In the following unit (Zref and Zmin Subunit) Z values are computed using an "edge- walking" algorithm. This algorithm requires information regarding the orientation of the triangle, which is determined here.
YT = YsortTopSrc→mux(y2,y1,y0); YB = YsortBotSrc→mux(y2,y1,y0); XR = XsortRhtSrc→mux(x2,x1,x0); XL = XsortLftSrc→mux(x2,x1,x0); DeltaYTB = YT - YB; DeltaXRL = XR - XL;
If triangle:
Xmajor = |DeltaXRL| >= |DeltaYTB|
If quad
Xmajor = value of Xmajor as determined for lines in the TLP subunit.
An x-major line is defined in the OpenGL® specification. In setup 215, an x-major line is determined early, but conceptually it may be determined anywhere it is convenient.
5.4.8.2.2 Compute ZslopeMjr and ZslopeMnr
The values needed by the next unit (the Z min and Z ref subunit) are ZslopeMjr (the Z derivative along the major edge) and ZslopeMnr (the Z gradient along the minor axis). Some definitions: (a) Xmajor Triangle: if the triangle spans a greater or equal distance in the x dimension than in the y dimension, it is an Xmajor triangle; otherwise it is a Ymajor triangle. (b) Xmajor Line: if the axis of the line spans a greater or equal distance in the x dimension than in the y dimension, it is an Xmajor line; otherwise it is a Ymajor line. (c) Major Edge (also known as the long edge): for Xmajor triangles, it is the edge connecting the leftmost and rightmost vertices; for Ymajor triangles, it is the edge connecting the topmost and bottommost vertices; for lines, it is the axis of the line. Note that although we often refer to the major edge as the "long edge," it is not necessarily the longest edge; it is the edge that spans the greatest distance along either the x or y dimension. (d) Minor Axis: if the triangle or line is Xmajor, the minor axis is the y axis; if the triangle or line is Ymajor, the minor axis is the x axis. To compute ZslopeMjr and ZslopeMnr: If Xmajor Triangle:
ZslopeMjr = (ZL - ZR) / (XL - XR) ZslopeMnr = ZY
If Ymajor Triangle: ZslopeMjr = (ZT - ZB) / (YT - YB) ZslopeMnr = ZX
If Xmajor Line & (xCntUp==yCntUp)
ZslopeMjr = (QZR - QZB) / (QXR - QXB) ZslopeMnr = ZY
If Xmajor Line & (xCntUp != yCntUp)
ZslopeMjr = (QZL - QZB) / (QXL - QXB) ZslopeMnr = ZY If Ymajor Line & (xCntUp==yCntUp)
ZslopeMjr = (QZR - QZB) / (QYR - QYB) ZslopeMnr = ZX
If Ymajor Line & (xCntUp != yCntUp)
ZslopeMjr = (QZL - QZB) / (QYL - QYB) ZslopeMnr = ZX
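For the two triangle cases above, the selection can be sketched as follows. This is an illustration; the vertex tuples, argument order, and function name are our own, and the x-sorted and y-sorted extreme vertices are assumed given.

```python
def triangle_z_slopes(left, right, top, bot, dzdx, dzdy):
    # left/right are the x-sorted extreme vertices, top/bot the y-sorted
    # ones; each vertex is (x, y, z).
    (xl, yl, zl), (xr, yr, zr) = left, right
    (xt, yt, zt), (xb, yb, zb) = top, bot
    xmajor = abs(xr - xl) >= abs(yt - yb)
    if xmajor:
        # Z slope along the left-right major edge, then ZY as minor slope.
        return (zl - zr) / (xl - xr), dzdy
    # Z slope along the top-bottom major edge, then ZX as minor slope.
    return (zt - zb) / (yt - yb), dzdx
```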
5.4.8.2.3 Special Case for Large Depth Gradients
It is possible for triangles to generate arbitrarily large values of Dz/Dx and Dz/Dy. Values that are too large present two problems: overflow of fixed point data paths, and errors magnified by the large size of a depth gradient.
In a preferred embodiment of the present invention, cull 410 has a fixed point datapath that is capable of handling Dz/Dx and Dz/Dy no wider than 35 bits. These 35 bits specify a value designated T27.7 (a two's complement number that has 27 integer bits and 7 fractional bits). Hence, the magnitude of the depth gradients must be less than 2^27.
As mentioned above, computation of Z at any given (X,Y) coordinate would be subject to large errors if the depth gradients were large. In such a situation, even a small error in X or Y will be magnified by the depth gradient. Therefore, in a preferred embodiment of the present invention, the following is done in the case of large depth gradients, where GRMAX is the threshold for the largest allowable depth gradient (it is set via the auxiliary ring, determined and set via software executing on, for example, computer 101; see FIG. 1):
If ( (|Dz/Dx| > GRMAX) or (|Dz/Dy| > GRMAX) ) Then
If Xmajor Triangle or Xmajor Line: Set ZslopeMnr = 0; Set Dz/Dx = ZslopeMjr; Set Dz/Dy = 0;
If Ymajor Triangle or Ymajor Line: Set ZslopeMnr = 0; Set Dz/Dx = 0; and, Set Dz/Dy = ZslopeMjr.
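The large-gradient fallback can be sketched as follows (illustrative; here GRMAX is passed as a plain parameter rather than read from the auxiliary ring, and names are ours):

```python
def clamp_large_gradients(dzdx, dzdy, zslope_mjr, zslope_mnr,
                          xmajor, grmax):
    # If either gradient exceeds the threshold, keep only the Z slope
    # along the major direction and zero the rest, per the rules above.
    if abs(dzdx) > grmax or abs(dzdy) > grmax:
        zslope_mnr = 0
        if xmajor:
            dzdx, dzdy = zslope_mjr, 0
        else:
            dzdx, dzdy = 0, zslope_mjr
    return dzdx, dzdy, zslope_mjr, zslope_mnr
```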
5.4.8.2.4 Discarding Edge-On Triangles
Edge-on triangles are detected in depth gradient unit 7 (see FIG. 8). Whenever Dz/Dx or Dz/Dy is infinite (overflows), the triangle is invalidated. However, edge-on line mode triangles are not discarded; each of the visible edges is to be rendered. In a preferred embodiment of the present invention, the depth offset (if turned on) for such a triangle will, however, overflow and be clamped to +/- 2^24.
5.4.8.2.5 Infinite dx/dy
An infinite dx/dy implies that an edge is perfectly horizontal. In the case of horizontal edges, one of the two end-points has to be a corner vertex (VtxLeftC or VtxRightC). With a primitive whose coordinates lie within the window range, Cull 410 (see FIG. 4) will not make use of an infinite slope. This is because, with Cull's 410 edge walking algorithm, it will be able to tell from the y value of the left and/or right corner vertices that it has turned a corner and that it will not need to walk along the horizontal edge at all. When the primitive's coordinates extend beyond the window range, however, Cull's 410 edge walking will need a slope. Since the start point for edge walking is at the very edge of the window, any X that edge walking calculates with a correctly signed slope will cause an overflow (or underflow) and X will simply be clamped back to the window edge. So it is actually unimportant what value of slope it uses as long as it is of the correct sign.
A value of infinity is also a don't care for setup's 215 own usage of slopes. Setup uses slopes to calculate intercepts of primitive edges with tile edges. The equation for calculating the intercept is of the form X = X0 + ΔY * dx/dy. In this case, a dx/dy of infinity necessarily implies a ΔY of zero. If the implementation is such that zero times any number equals zero, then dx/dy is a don't care.
Setup 215 calculates slopes internally in floating point format. The floating point units will assert an infinity flag should an infinite result occur. Because Setup doesn't care about infinite slopes, and Cull 410 doesn't care about the magnitude of infinite slopes but does care about the sign, setup 215 doesn't need to express infinity. To save the trouble of determining the correct sign, setup 215 forces an infinite slope to ZERO before it passes it on to Cull 410.
5.4.9 Z min and Z ref
We now compute the minimum z value for the intersection of the primitive with the tile. The object of this subunit is to: (a) select the 3 possible locations where the minimum Z value may be; (b) calculate the Z's at these 3 points, applying a correction bias if needed; (c) select the minimum Z value of the polygon within the tile; (d) use the stamp center nearest the location of the minimum Z value as the reference stamp location; (e) compute the Zref value; and, (f) apply the Z offset value.
There are possibly 9 valid clipping points as determined by the Clipping subunit. The minimum Z value will be at one of these points. Note that depth computation is an expensive operation, and it is therefore desirable to minimize the number of depth computations that need to be carried out. Without pre-computing any Z values, it is possible to reduce the 9 possible locations to 3 possible Zmin locations by checking the signs of ZX and ZY (the signs of the partial z derivatives in x and y). Clipping points (Xmin0, Ymin0, Valid), (Xmin1, Ymin1, Valid), (Xmin2, Ymin2, Valid) are the 3 candidate Zmin locations and their valid bits. It is possible that some of these are invalid. It is desirable to remove invalid clipping points from consideration. To accomplish this, setup 215 locates the tile corner that would correspond to a minimum depth value if the primitive completely covered the tile. Once setup 215 has determined that tile corner, then setup 215 need only compute the depth value at the two nearest clipped points.
These two values along with the z value at vertex i1 (Clip Point P1) provide us with the three possible minimum z values. Possible clip points are PTL, PTR, PLT, PLB, PRT, PRB, PBR, PBL, and P1 (the depth value of P1 is always the depth value of the y-sorted middle vertex (ysortMid)). The three possible depth value candidates must be compared to determine the smallest depth value and its location. We then know the minimum z value and the clip vertex it is obtained from. In a preferred embodiment of the present invention, the Z-value is clamped to 24 bits before being sent to Cull 410. To illustrate the above, refer to the pseudocode below for identifying those clipping points that are minimum depth value candidates:
Notational Note:
ClipTL = (ClipXTL, ClipYT, ValidClipT), ClipLT = (ClipXL, ClipYLT, ValidClipL), etc.
If (ZX>0) & (ZY>0) // Min Z is toward the bottom left Then
(Xmin0, Ymin0) = ValidClipL ? ClipLB : ValidClipT ? ClipTL : ClipRB
Zmin0Valid = ValidClipL | ValidClipT | ValidClipR
(Xmin1, Ymin1) = ValidClipB ? ClipBL : ValidClipR ? ClipRB : ClipTL
Zmin1Valid = ValidClipL | ValidClipB | ValidClipT
(Xmin2, Ymin2) = Clip1; Zmin2Valid = (PrimType = Triangle)
If (ZX>0) & (ZY<0) // Min Z is toward the top left Then
(Xmin0, Ymin0) = ValidClipL ? ClipLT : ValidClipB ? ClipBL : ClipRT
Zmin0Valid = ValidClipL | ValidClipB | ValidClipR
(Xmin1, Ymin1) = ValidClipT ? ClipTL : ValidClipR ? ClipRT : ClipBL
Zmin1Valid = ValidClipT | ValidClipR | ValidClipB
(Xmin2, Ymin2) = Clip1
Zmin2Valid = (PrimType = Triangle)
If (ZX<0) & (ZY>0) // Min Z is toward the bottom right Then
(Xmin0, Ymin0) = ValidClipR ? ClipRB : ValidClipT ? ClipTR : ClipLB
Zmin0Valid = ValidClipR | ValidClipT | ValidClipL
(Xmin1, Ymin1) = ValidClipB ? ClipBR : ValidClipL ? ClipLB : ClipTR
Zmin1Valid = ValidClipB | ValidClipL | ValidClipT
(Xmin2, Ymin2) = Clip1
Zmin2Valid = (PrimType = Triangle)
If (ZX<0) & (ZY<0) // Min Z is toward the top right Then
(Xmin0, Ymin0) = ValidClipR ? ClipRT : ValidClipB ? ClipBR : ClipLT
Zmin0Valid = ValidClipR | ValidClipB | ValidClipL
(Xmin1, Ymin1) = ValidClipT ? ClipTR : ValidClipL ? ClipLT : ClipBR
Zmin1Valid = ValidClipT | ValidClipL | ValidClipB
(Xmin2, Ymin2) = Clip1
Zmin2Valid = (PrimType = Triangle)
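The first step of the selection above, picking the tile corner toward which Z decreases from the gradient signs, can be sketched as follows (the function name and string labels are ours):

```python
def zmin_corner(dzdx, dzdy):
    # Z shrinks in the direction opposite each positive gradient, so
    # ZX>0 & ZY>0 puts the minimum toward the bottom left, and so on.
    horiz = "left" if dzdx > 0 else "right"
    vert = "bottom" if dzdy > 0 else "top"
    return vert, horiz
```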
Referring to FIG. 20, there is shown an example of Zmin candidates.
5.4.9.1 The Z Calculation Algorithm
The following algorithm's path of computation stays within the triangle and will produce intermediate Z values that are within the range of 2^24 (this equation will not suffer from massive cancellation due to the use of limited precision floating point units). For a Y major triangle:
Zdest = + (Ydest - Ytop) * ZslopeMjr (1)
+ (Xdest - ((Ydest - Ytop) * DX/Dylong + Xtop)) * ZslopeMnr (2)
+ Ztop (3)
+ offset (4)
Line (1) represents the change in Z as you walk along the long edge down to the appropriate Y coordinate. Line (2) is the change in Z as you walk in from the long edge to the destination X coordinate.
For an X major triangle the equation is analogous:
Zdest = + (Xdest - Xright) * ZslopeMjr (1)
+ (Ydest - ((Xdest - Xright) * Dy/Dxlong + Yright)) * ZslopeMnr (2)
+ Zright (3)
+ offset (4)
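The Y major form of this evaluation can be sketched as follows (an illustration; argument names mirror the equation and are our own):

```python
def z_at(xdest, ydest, ytop, xtop, dxdy_long,
         zslope_mjr, zslope_mnr, ztop, offset):
    # Walk down the long edge to ydest, then walk in to xdest.
    x_on_edge = (ydest - ytop) * dxdy_long + xtop  # long-edge x at ydest
    return ((ydest - ytop) * zslope_mjr            # (1) along major edge
            + (xdest - x_on_edge) * zslope_mnr     # (2) in from the edge
            + ztop                                 # (3)
            + offset)                              # (4)
```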
For dealing with large values of depth gradient, the values specified in special case for large depth gradients (discussed in greater detail above) are used.
5.4.9.2 Compute Z's for Zmin candidates
The 3 candidate Zmin locations have been identified (discussed above in greater detail). Remember that a flag needs to be carried to indicate whether each Zmin candidate is valid or not.
Compute: If Ymajor triangle:
Zmin0 = + (Ymin0 - Ytop) * ZslopeMjr + (Xmin0 - ((Ymin0 - Ytop) * DX/Dylong + Xtop)) * ZslopeMnr (note that Ztop and offset are NOT yet added).
If Xmajor triangle: Zmin0 = + (Xmin0 - Xright) * ZslopeMjr + (Ymin0 - ((Xmin0 - Xright) * Dy/Dxlong + Yright)) * ZslopeMnr (note that Zright and offset are NOT yet added).
A correction to the zmin value may need to be applied if xmin0 or ymin0 is equal to a tile edge. Because of the limited precision math units used, the values of intercepts (computed above while calculating intersections and determining clipping points) have an error of less than +/- 1/16 of a pixel. To guarantee that we compute a Zmin that is less than what would be the infinitely precise Zmin, we apply a bias to the zmin that we compute here.
If xmin0 is on a tile edge, subtract |dZ/dY|/16 from zmin0; if ymin0 is on a tile edge, subtract |dZ/dX|/16 from zmin0; if xmin0 and ymin0 are on a tile corner, don't subtract anything; and, if neither xmin0 nor ymin0 is on a tile edge, don't subtract anything. The same equations are used to compute Zmin1 and Zmin2.
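The conservative bias can be sketched as follows (illustrative; the edge flags and the function name are our own):

```python
def bias_zmin(zmin, on_x_edge, on_y_edge, dzdx, dzdy):
    # Intercepts carry up to 1/16 pixel of error, so subtract the
    # matching |gradient|/16 unless the point is a tile corner or
    # touches no tile edge at all.
    if on_x_edge and on_y_edge:       # tile corner: subtract nothing
        return zmin
    if on_x_edge:                     # x exact, y uncertain
        return zmin - abs(dzdy) / 16
    if on_y_edge:                     # y exact, x uncertain
        return zmin - abs(dzdx) / 16
    return zmin
```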
5.4.9.3 Determine Zmin
The minimum valid value of the three Zmin candidates is the Tile's Zmin. The stamp whose center is nearest the location of the Zmin is the reference stamp. The pseudocode for selecting the Zmin is as follows:
ZminTmp = (Zmin1 < Zmin0) & Zmin1Valid | !Zmin0Valid ? Zmin1 : Zmin0; ZminTmpValid = (Zmin1 < Zmin0) & Zmin1Valid | !Zmin0Valid ? Zmin1Valid : Zmin0Valid; and, Zmin = (ZminTmp < Zmin2) & ZminTmpValid | !Zmin2Valid ? ZminTmp : Zmin2.
The x and y coordinates corresponding to each of Zmin0, Zmin1 and Zmin2 are also sorted in parallel along with the determination of Zmin. So when Zmin is determined, there is also a corresponding xmin and ymin.
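The valid-qualified minimum selection above can be sketched as follows (function names are ours):

```python
def pick_min(a, a_valid, b, b_valid):
    # Prefer b when it is smaller and valid, or when a is invalid,
    # mirroring the ZminTmp ternaries above.
    take_b = (b < a and b_valid) or not a_valid
    return (b, b_valid) if take_b else (a, a_valid)

def select_zmin(cands):
    # Fold the three (z, valid) candidates down to the tile Zmin.
    z, v = cands[0]
    for zi, vi in cands[1:]:
        z, v = pick_min(z, v, zi, vi)
    return z, v
```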
5.4.10 Reference Stamp and Z ref
Instead of passing Z values for each vertex of the primitive to cull 410, Setup passes a single Z value, representing the Z value at a specific point within the primitive. Setup chooses a reference stamp that contains the vertex with the minimum z. The reference stamp is the stamp whose center is closest to the location of Zmin as determined in section 5.4.9.3. (Its coordinates are called xmin, ymin.) That stamp center is found by truncating the xmin and ymin values to the nearest even value. For vertices on the right edge, the x-coordinate is decremented, and for the top edge the y-coordinate is decremented, before the reference stamp is computed, to ensure choosing a stamp center that is within tile boundaries.
Logic Used to Identify the Reference Stamp
The reference Z value, "Zref", is calculated at the center of the reference stamp. Setup 215 identifies the reference stamp with a pair of 3 bit values, xRefStamp and yRefStamp, that specify its location in the Tile. Note that the reference stamp is identified as an offset in stamps from the corner of the Tile. To get an offset in screen space, multiply by the number of pixels in a stamp. For example: x = (x tile coordinate multiplied by the number of pixels in the width of a tile) plus (xRefStamp multiplied by two). This gives us an x-coordinate in pixels in screen space. The reference stamp must touch the clipped polygon. To ensure this, choose the center of the stamp nearest the location of the Zmin to be the reference stamp. In the Zmin selection and sorting, keep track of the vertex coordinates that were ultimately chosen. Call this point (Xmin, Ymin).
If Zmin is located on the right tile edge, then clamp: Xmin = tileLft + 7 stamps; if Zmin is located on the top tile edge, then clamp: Ymin = tileBot + 7 stamps;
Xref = trunc(Xmin)stamp + 1 pixel (truncate to snap to stamp resolution); and, Yref = trunc(Ymin)stamp + 1 pixel (add 1 pixel to move to stamp center).
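The snap-to-stamp-center step can be sketched as follows (illustrative; 2x2 pixel stamps are assumed, so stamp centers sit at odd pixel coordinates, and the clamp keeps edge points inside the tile; all names are ours):

```python
STAMP = 2  # pixels per stamp edge

def ref_stamp_center(xmin, ymin, tile_left, tile_bot, tile_size=16):
    # Clamp points sitting on the right/top tile edges back inside.
    xmin = min(xmin, tile_left + tile_size - 1)
    ymin = min(ymin, tile_bot + tile_size - 1)
    # Truncate to stamp resolution, then add 1 pixel to reach the center.
    xref = (int(xmin) // STAMP) * STAMP + 1
    yref = (int(ymin) // STAMP) * STAMP + 1
    return xref, yref
```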
Calculate Zref using an analogous equation to the zMin calculations.
Compute: If Ymajor triangle:
Zref = + (Yref - Ytop) * ZslopeMjr + (Xref - ((Yref - Ytop) * DX/Dylong + Xtop)) * ZslopeMnr (note that Ztop and offset are NOT yet added).
If Xmajor triangle: Zref = + (Xref - Xright) * ZslopeMjr + (Yref - ((Xref - Xright) * Dy/Dxlong + Yright)) * ZslopeMnr (note that Zright and offset are NOT yet added).
5.4.10.1 Apply Depth Offset
The Zmin and Zref calculated thus far still need further Z components added.
If Ymajor: (a) Zmin = Zmin + Ztop + Zoffset; (b) clamp Zmin to lie within the range (-2^24, 2^24); and, (c) Zref = Zref + Ztop + Zoffset.
If Xmajor: (a) Zmin = Zmin + Zright + Zoffset; (b) clamp Zmin to lie within the range (-2^24, 2^24); and, (c) Zref = Zref + Zright + Zoffset.
5.4.11 X and Y coordinates passed to Cull 410
Setup calculates Quad vertices with extended range (s12.5 pixels). In cases where a quad vertex does fall outside of the window range, Setup will pass the following values to Cull 410:
If XTopR is right of the window range, then clamp to the right window edge;
If XTopL is left of the window range, then clamp to the left window edge;
If XrightC is right of the window range, then pick the RightBot Clip Point;
If XleftC is left of the window range, then pick the LeftBot Clip Point;
Ybot is always the min Y of the Clip Points.
Referring to FIG. 21, there are shown examples of out of range quad vertices.
5.4.11.1 Tile Relative X-coordinates and Y-coordinates
Sort 320 sends screen relative values to setup 215. Setup 215 does most calculations in screen relative space. Setup 215 then converts results to tile relative space for cull 410. Cull 410 culls primitives using these coordinates. The present invention is a tiled architecture; both this invention and the mid-pipeline cull unit 410 are novel. Cull 410 requires a new type of information that is not calculated by conventional setup units. For example, consider the last 21 elements in setup output primitive packet 7000 (see Table 7). Some of these elements are tile relative, which helps the efficiency of subsequent processing stages of pipeline 200.
Table 1 Example of begin frame packet 1000
BeginFramePacket parameter bits/packet Starting bit Source Destination/Value
Header 5 send unit
Block3DPipe 1 0 SW BKE
WinSource 8 1 SW BKE
WinSourceR 8 9 SW BKE
WinTarget 8 17 SW BKE duplicate wi
WinTargetR 8 25 SW BKE duplicate wi
WinXOffset 8 33 SW BKE tiles are du:
WinYOffset 12 41 SW BKE
PixelFormat 2 53 SW BKE
SrcColorKeyEnable3D 55 SW BKE
DestColorKeyEnable3D 56 SW BKE
NoColorBuffer 57 SW PIX, BKE
NoSavedColorBuffer 58 SW PIX, BKE
NoDepthBuffer 59 SW PIX, BKE
NoSavedDepthBuffer 60 SW PIX, BKE
NoStencilBuffer 61 SW PIX, BKE
NoSavedStencilBuffer 62 SW PIX, BKE
StencilMode 63 SW PIX
DepthOutSelect 2 64 SW PIX
ColorOutSelect 2 66 SW PIX
ColorOutOverflowSelect 2 68 SW PIX
PixelsVert 11 70 SW SRT.BKE
PixelsHoriz 11 81 SW SRT
SuperTileSize 2 92 SW SRT
SuperTileStep 1 θ SW SRT
SortTranspMode 108 SW SRT, CUL
DrawFrontLeft 109 SW SRT
DrawFrontRight 110 SW SRT
DrawBackLeft 111 SW SRT
DrawBackRight 112 SW SRT
StencilFirst 113 SW SRT
BreakPointFrame 11 SW SRT
120
Table 2 Example of begin tile packet 2000
BeginTilePacket parameter bits/packet Starting bit Source Destination
PktType 5 0
FirstTileInFrame 1 0 SRT STP to BKE
BreakPointTile 1 1 SRT STP to BKE
TileRight 1 2 SRT BKE
TileFront 1 3 SRT BKE
TileXLocation 7 4 SRT STP,CUL,PIX,BKE
TileYLocation 7 11 SRT STP,CUL,PIX,BKE
TileRepeat 1 18 SRT CUL
TileBeginSubFrame 1 19 SRT CUL
BeginSuperTile 1 20 SRT STP to BKE for perf count
OverflowFrame 1 21 SRT PIX, BKE
WriteTileZS 1 22 SRT BKE
BackendClearColor 1 23 SRT PIX, BKE
BackendClearDepth 1 24 SRT CUL, PIX, BKE
BackendClearStencil 1 25 SRT PIX, BKE
ClearColorValue 32 26 SRT PIX
ClearDepthValue 24 58 SRT CUL, PIX
ClearStencilValue 8 82 SRT PIX
95
Table 3
Example of clear packet 3000
Srt2StpClear parameter bits/packet Starting bit Source Destination/Value
Header 5 0
PixelModeIndex 4 0
ClearColor 1 4 SW CUL, PIX
ClearDepth 1 5 SW CUL, PIX
ClearStencil 1 6 SW CUL, PIX
ClearColorValue 32 7 SW SRT, PIX
ClearDepthValue 24 39 SW SRT, CUL, PIX
ClearStencilValue 8 63 SW SRT, PIX
SendToPixel 1 71 SW SRT, CUL 72
ColorAddress 23 72 MEX MIJ
ColorOffset 8 95 MEX MIJ
ColorType 2 103 MEX MIJ
ColorSize 2 105 MEX MIJ
112
Table 4 Example of cull packet 4000
parameter bits/packet Starting Bit Source Destination
SrtOutPktType 5 SRT STP
CullFlushAll 1 0 SW CUL
reserved 1 1 SW CUL
OffsetFactor 24 2 SW STP
31
Table 5 Example of end frame packet 5000
EndFramePacket parameter bits/packet Starting bit Source Destination/Value
InterruptNumber 6 0 SW BKE
SoftEndFrame 1 6 SW MEX
BufferOverflowOccurred 1 7 MEX MEX, SRT
13
Table 6 Example of primitive packet 6000
parameter bits/packet Starting Address Source Destination
SrtOutPktType 5 0 SRT STP
ColorAddress 23 5 MEX MIJ
ColorOffset 8 28 MEX MIJ
ColorType 2 36 MEX MIJ, STP
ColorSize 2 38 MEX MIJ
LinePointWidth 3 40 MEX STP .PIX
Table 7 Example of setup output primitive packet 7000
Parameter Bits Starting bit Source Destination Comments
StpOutPktType STP CUL
ColorAddress 23 0 MEX MIJ
ColorOffset 8 23 MEX MIJ
ColorType 2 31 MEX MIJ 0 = strip, 1 = fan, 2 = line, 3 = point
ColorSize 2 33 MEX MIJ These 6 bits of ColorType, ColorSize, and ColorEdgeId are encoded as EESSTT
ColorEdgeld 2 35 STP CUL 0 = filled, 1 = v0v1, 2 = v1v2, 3= v2v0
LinePointWidth 3 37 GEO CUL
Multisample 40 SRT CUL, FRG, PIX
CullFlushOverlap 41 GEO CUL
DoAlphaTest 42 GEO CUL
DoABlend 43 GEO CUL
DepthFunc 3 44 SW CUL
DepthTestEnable 47 SW CUL
DepthMask 48 SW CUL
dZdx 35 49 STP CUL z partial along x, T27.7 (set to zero for points)
dZdy 35 84 STP CUL z partial along y, T27.7 (set to zero for points)
PrimType 119 STP CUL 1 => triangle, 2 => line, and 3 => point. This is in addition to ColorType and ColorEdgeID. This is incorporated so that CUL does not have to decode ColorType. STP creates unified packets for triangles and lines, but they may have different aliasing state, so CUL needs to know whether the packet is point, line, or triangle.
LeftValid 121 STP CUL LeftCorner valid? (don't care for points)
RightValid 122 STP CUL RightCorner valid? (don't care for points)
XleftTop 24 123 STP CUL Left and right intersects with top tile edge. Also contains xCenter for point. Note that these points are used to start edge walking on the left and right edge respectively, so they may actually be outside the edges of the tile (11.13)
XrightTop 24 147 STP CUL
YLRTop 8 171 STP CUL Bbox Ymax, tile relative 5.3
XleftCorner 24 179 STP CUL x window coordinate of the left corner (unsigned fixed point 11.13) (don't care for points)
YleftCorner 8 203 STP CUL tile-relative y coordinate of left corner (unsigned 5.3) (don't care for points)
XrightCorner 24 211 STP CUL x window coordinate of the right corner, unsigned fixed point 11.13 (don't care for points)
YrightCorner 8 235 STP CUL tile-relative y coordinate of right corner 5.3, also contains Yoffset for point
YBot 8 243 STP CUL Bbox Ymin, tile relative 5.3
DxDyLeft 24 251 STP CUL slope of the left edge, T14.9 (don't care for points)
DxDyRight 24 275 STP CUL slope of the right edge, T14.9 (don't care for points)
DxDyBot 24 299 STP CUL slope of the bottom edge, T14.9 (don't care for points)
XrefStamp 3 323 STP CUL ref stamp x index on tile (set to zero for points)
YrefStamp 3 326 STP CUL ref stamp y index on tile (set to zero for points)
ZRefTile 32 329 STP CUL Ref z value, s28.3
XmaxStamp 3 361 STP CUL Bbox max stamp x index
XminStamp 3 364 STP CUL Bbox min stamp x index
YmaxStamp 3 367 STP CUL Bbox max stamp y index
YminStamp 3 370 STP CUL Bbox min stamp y index
ZminTile 24 373 STP CUL min z of the prim on tile
402

Table of Contents
3 Summary of the Invention - 6
4 Brief Description of the Drawings - 7
5. Detailed Description of Preferred Embodiments of the Invention - 9 5.1 System Overview - 9
5.1.1 Other Processing Stages 210 - 11
5.1.2 Other Processing Stages 220 - 12 5.2 Setup 215 Overview - 13
5.2.1 Interface I/O With Other Processing Stages of the Pipeline - 16
5.2.1.1 Sort 320 Setup 215 Interface - 16
5.2.1.2 Setup 215 Cull 410 Interface - 16
5.2.2 Setup Primitives - 16
5.2.2.1 Polygons - 16
5.2.2.2 Lines - 17
5.2.2.3 Points - 17 5.3 Unified Primitive Description - 17 5.4 High Level Functional Unit Architecture - 21
5.4.1 Triangle Preprocessing - 23
5.4.1.1 Sort With Respect to the Y Axis - 23
5.4.1.2 Slope Determination - 26
5.4.1.3 Determine Y-sorted Left Corner or Right Corner - 27
5.4.1.4 Sort Coordinates With Respect to the X Axis - 28
5.4.1.5 Determine X Sorted Top Corner or Bottom Corner and Identify Slopes - 30 5.4.2 Line Segment Preprocessing - 30
5.4.2.1 Line Orientation - 31
5.4.2.2 Line Slopes - 32
5.4.2.3 Line Mode Triangles - 33
5.4.2.4 Stippled Line Processing - 33
5.4.4 Trigonometric Functions Unit - 35
5.4.5 Quadrilateral Generation - 37 5.4.5.1 Line Segments - 38 5.4.5.2 Aliased Points - 40
5.4.6 Clipping Unit - 41
5.4.6.1 Clip Codes - 42
5.4.6.2 Clipping Points - 43 5.4.6.3 Validation of Clipping Points - 44
5.4.6.4 Quadrilateral Vertices Outside of Window
- 54
5.4.7 Bounding Box - 55
5.4.8 Depth Gradients and Depth Offset Unit - 56 5.4.8.1 Depth Gradients - 57
5.4.8.2 Depth Offset - 58
5.4.8.2.1 Determine X major for triangles - 59
5.4.8.2.2 Compute ZslopeMjr and ZslopeMnr - 59
5.4.8.2.3 Special Case for Large Depth Gradients - 60
5.4.8.2.4 Discarding Edge-On Triangles - 61
5.4.8.2.5 Infinite dx/dy - 61
5.4.9 Z min and Z ref - 62
5.4.9.3 Determine Zmin - 67
5.4.10 Reference Stamp and Z ref - 67 5.4.10.1 Apply Depth Offset - 68
5.4.11 X and Y coordinates passed to Cull 410 - 69 5.4.11.1 Tile Relative X-coordinates and Y-coordinates - 69
6.0 Claims - 70
7.0 Abstract of the Disclosure - 72
Table 1 - 73
Table 2 - 74
Table 3 - 75
Table 4 - 76
Table 5 - 77
Table 6 - 78
Table 7 - 79

Claims

6.0 Claims
WHAT IS CLAIMED IS:
1. In a tile based 3-D graphics pipeline, a system for post tile sorting setup, comprising: a mid-pipeline setup unit, adapted to:
(a) receive image data from a previous stage of the graphics pipeline, the image data comprising vertices describing a primitive, the image data having already been sorted with respect to a tile in a 2-D window, the window having been divided into a plurality of tiles;
(b) compute a set of vertices defining an area of intersection between the primitive and the tile; and,
(c) calculate a minimum depth value for that part of the primitive intersecting the tile.
2. In a tile based 3-D graphics pipeline, a system for post tile sorting setup, comprising: a mid-pipeline setup unit, adapted to: (a) receive image data from a previous stage of the graphics pipeline, the image data comprising vertices describing a primitive, wherein the x-coordinates are screen based and the y-coordinates are tile based, the image data having already been sorted with respect to a tile in a 2-D window, the window having been divided into a plurality of tiles; (b) determine a set of clipping points defining an area of intersection between the primitive and the tile; and,
(c) compute a minimum depth value for that part of the primitive intersecting the tile.
3. In a 3-D graphics pipeline, a system for uniformly representing primitives as quadrilaterals, comprising: a mid-pipeline primitive preprocessing unit adapted to represent a line segment and a triangle as a rectangle, wherein both the line segment and the triangle are described with a respective set of four vertices, and wherein not all of the vertices of the respective set of four vertices are needed to describe the triangle.
PCT/US1999/019240 1998-08-20 1999-08-20 Apparatus and method for performing setup operations in a 3-d graphics pipeline using unified primitive descriptors WO2000011562A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US9733698P 1998-08-20 1998-08-20
US60/097,336 1998-08-20

Publications (2)

Publication Number Publication Date
WO2000011562A1 true WO2000011562A1 (en) 2000-03-02
WO2000011562B1 WO2000011562B1 (en) 2000-05-04

Family

ID=22262858

Family Applications (5)

Application Number Title Priority Date Filing Date
PCT/US1999/019191 WO2000011607A1 (en) 1998-08-20 1999-08-20 Deferred shading graphics pipeline processor
PCT/US1999/019192 WO2000011602A2 (en) 1998-08-20 1999-08-20 Method and apparatus for generating texture
PCT/US1999/019263 WO2000010372A2 (en) 1998-08-20 1999-08-20 System, apparatus and method for spatially sorting image data in a three-dimensional graphics pipeline
PCT/US1999/019200 WO2000011603A2 (en) 1998-08-20 1999-08-20 Graphics processor with pipeline state storage and retrieval
PCT/US1999/019240 WO2000011562A1 (en) 1998-08-20 1999-08-20 Apparatus and method for performing setup operations in a 3-d graphics pipeline using unified primitive descriptors

Family Applications Before (4)

Application Number Title Priority Date Filing Date
PCT/US1999/019191 WO2000011607A1 (en) 1998-08-20 1999-08-20 Deferred shading graphics pipeline processor
PCT/US1999/019192 WO2000011602A2 (en) 1998-08-20 1999-08-20 Method and apparatus for generating texture
PCT/US1999/019263 WO2000010372A2 (en) 1998-08-20 1999-08-20 System, apparatus and method for spatially sorting image data in a three-dimensional graphics pipeline
PCT/US1999/019200 WO2000011603A2 (en) 1998-08-20 1999-08-20 Graphics processor with pipeline state storage and retrieval

Country Status (3)

Country Link
US (11) US6552723B1 (en)
AU (4) AU5688199A (en)
WO (5) WO2000011607A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6532013B1 (en) 2000-05-31 2003-03-11 Nvidia Corporation System, method and article of manufacture for pixel shaders for programmable shading
US6664963B1 (en) 2000-05-31 2003-12-16 Nvidia Corporation System, method and computer program product for programmable shading using pixel shaders
US6690372B2 (en) 2000-05-31 2004-02-10 Nvidia Corporation System, method and article of manufacture for shadow mapping
US6697064B1 (en) 2001-06-08 2004-02-24 Nvidia Corporation System, method and computer program product for matrix tracking during vertex processing in a graphics pipeline
US6704025B1 (en) 2001-08-31 2004-03-09 Nvidia Corporation System and method for dual-depth shadow-mapping
US6734861B1 (en) 2000-05-31 2004-05-11 Nvidia Corporation System, method and article of manufacture for an interlock module in a computer graphics processing pipeline
US6778181B1 (en) 2000-12-07 2004-08-17 Nvidia Corporation Graphics processing system having a virtual texturing array
US6844880B1 (en) 1999-12-06 2005-01-18 Nvidia Corporation System, method and computer program product for an improved programmable vertex processing model with instruction set
US6870540B1 (en) * 1999-12-06 2005-03-22 Nvidia Corporation System, method and computer program product for a programmable pixel processing model with instruction set
US7006101B1 (en) 2001-06-08 2006-02-28 Nvidia Corporation Graphics API with branching capabilities
US7009615B1 (en) 2001-11-30 2006-03-07 Nvidia Corporation Floating point buffer system and method for use during programmable fragment processing in a graphics pipeline
US7009605B2 (en) 2002-03-20 2006-03-07 Nvidia Corporation System, method and computer program product for generating a shader program
US7023437B1 (en) 1998-07-22 2006-04-04 Nvidia Corporation System and method for accelerating graphics processing using a post-geometry data stream during multiple-pass rendering
US7162716B2 (en) 2001-06-08 2007-01-09 Nvidia Corporation Software emulator for optimizing application-programmable vertex processing
US7170513B1 (en) 1998-07-22 2007-01-30 Nvidia Corporation System and method for display list occlusion branching
US7209140B1 (en) 1999-12-06 2007-04-24 Nvidia Corporation System, method and article of manufacture for a programmable vertex processing model with instruction set
US7286133B2 (en) 2001-06-08 2007-10-23 Nvidia Corporation System, method and computer program product for programmable fragment processing
US7456838B1 (en) 2001-06-08 2008-11-25 Nvidia Corporation System and method for converting a vertex program to a binary format capable of being executed by a hardware graphics pipeline
CN102835119A (en) * 2010-04-01 2012-12-19 英特尔公司 A multi-core processor supporting real-time 3D image rendering on an autostereoscopic display

Families Citing this family (616)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6300956B1 (en) * 1998-03-17 2001-10-09 Pixar Animation Stochastic level of detail in computer animation
US6631423B1 (en) * 1998-03-31 2003-10-07 Hewlett-Packard Development Company, L.P. System and method for assessing performance optimizations in a graphics system
US7375727B1 (en) * 1998-07-22 2008-05-20 Nvidia Corporation System, method and computer program product for geometrically transforming geometric objects
US6480205B1 (en) 1998-07-22 2002-11-12 Nvidia Corporation Method and apparatus for occlusion culling in graphics systems
WO2000011607A1 (en) 1998-08-20 2000-03-02 Apple Computer, Inc. Deferred shading graphics pipeline processor
US6771264B1 (en) 1998-08-20 2004-08-03 Apple Computer, Inc. Method and apparatus for performing tangent space lighting and bump mapping in a deferred shading graphics processor
US7047391B2 (en) * 1998-09-14 2006-05-16 The Massachusetts Institute Of Technology System and method for re-ordering memory references for access to memory
GB2343603B (en) * 1998-11-06 2003-04-02 Videologic Ltd Shading 3-dimensional computer generated images
GB2343601B (en) * 1998-11-06 2002-11-27 Videologic Ltd Shading and texturing 3-dimensional computer generated images
US6417858B1 (en) * 1998-12-23 2002-07-09 Microsoft Corporation Processor for geometry transformations and lighting calculations
US6445386B1 (en) * 1999-01-15 2002-09-03 Intel Corporation Method and apparatus for stretch blitting using a 3D pipeline
US6362825B1 (en) * 1999-01-19 2002-03-26 Hewlett-Packard Company Real-time combination of adjacent identical primitive data sets in a graphics call sequence
US6469704B1 (en) * 1999-01-19 2002-10-22 Hewlett-Packard Company System and method for combined execution of graphics primitive data sets
US6732259B1 (en) 1999-07-30 2004-05-04 Mips Technologies, Inc. Processor having a conditional branch extension of an instruction set architecture
US7061500B1 (en) * 1999-06-09 2006-06-13 3Dlabs Inc., Ltd. Direct-mapped texture caching with concise tags
US6831636B1 (en) * 1999-06-29 2004-12-14 International Business Machines Corporation System and process for level of detail selection based on approximate visibility estimation
US6714197B1 (en) 1999-07-30 2004-03-30 Mips Technologies, Inc. Processor having an arithmetic extension of an instruction set architecture
US6631392B1 (en) 1999-07-30 2003-10-07 Mips Technologies, Inc. Method and apparatus for predicting floating-point exceptions
US6912559B1 (en) 1999-07-30 2005-06-28 Mips Technologies, Inc. System and method for improving the accuracy of reciprocal square root operations performed by a floating-point unit
US6697832B1 (en) 1999-07-30 2004-02-24 Mips Technologies, Inc. Floating-point processor with improved intermediate result handling
US7346643B1 (en) 1999-07-30 2008-03-18 Mips Technologies, Inc. Processor with improved accuracy for multiply-add operations
US6384833B1 (en) * 1999-08-10 2002-05-07 International Business Machines Corporation Method and parallelizing geometric processing in a graphics rendering pipeline
US6628836B1 (en) * 1999-10-05 2003-09-30 Hewlett-Packard Development Company, L.P. Sort middle, screen space, graphics geometry compression through redundancy elimination
US6476808B1 (en) * 1999-10-14 2002-11-05 S3 Graphics Co., Ltd. Token-based buffer system and method for a geometry pipeline in three-dimensional graphics
US6882642B1 (en) * 1999-10-14 2005-04-19 Nokia, Inc. Method and apparatus for input rate regulation associated with a packet processing pipeline
US6396502B1 (en) * 1999-10-15 2002-05-28 Hewlett-Packard Company System and method for implementing accumulation buffer operations in texture mapping hardware
US6876991B1 (en) 1999-11-08 2005-04-05 Collaborative Decision Platforms, Llc. System, method and computer program product for a collaborative decision platform
US6650325B1 (en) * 1999-12-06 2003-11-18 Nvidia Corporation Method, apparatus and article of manufacture for boustrophedonic rasterization
US6396503B1 (en) * 1999-12-31 2002-05-28 Hewlett-Packard Company Dynamic texture loading based on texture tile visibility
US6466226B1 (en) * 2000-01-10 2002-10-15 Intel Corporation Method and apparatus for pixel filtering using shared filter resource between overlay and texture mapping engines
US7483042B1 (en) * 2000-01-13 2009-01-27 Ati International, Srl Video graphics module capable of blending multiple image layers
US6433789B1 (en) * 2000-02-18 2002-08-13 Neomagic Corp. Steaming prefetching texture cache for level of detail maps in a 3D-graphics engine
US6819325B2 (en) * 2000-03-07 2004-11-16 Microsoft Corporation API communications for vertex and pixel shaders
US7159041B2 (en) * 2000-03-07 2007-01-02 Microsoft Corporation Method and system for defining and controlling algorithmic elements in a graphics display system
US6664955B1 (en) * 2000-03-15 2003-12-16 Sun Microsystems, Inc. Graphics system configured to interpolate pixel values
TW459206B (en) * 2000-03-17 2001-10-11 Silicon Integrated Sys Corp Texture mapping cache connection device and method
JP2001273518A (en) * 2000-03-28 2001-10-05 Toshiba Corp Rendering device
WO2001075804A1 (en) * 2000-03-31 2001-10-11 Intel Corporation Tiled graphics architecture
US6819321B1 (en) * 2000-03-31 2004-11-16 Intel Corporation Method and apparatus for processing 2D operations in a tiled graphics architecture
US7055095B1 (en) 2000-04-14 2006-05-30 Picsel Research Limited Systems and methods for digital document processing
US7009626B2 (en) 2000-04-14 2006-03-07 Picsel Technologies Limited Systems and methods for generating visual representations of graphical data and digital document processing
AUPQ691100A0 (en) * 2000-04-14 2000-05-11 Lim, Dr Hong Lip Improvements to 3d graphics
US6781600B2 (en) 2000-04-14 2004-08-24 Picsel Technologies Limited Shape processor
US7576730B2 (en) 2000-04-14 2009-08-18 Picsel (Research) Limited User interface systems and methods for viewing and manipulating digital documents
US6490635B1 (en) * 2000-04-28 2002-12-03 Western Digital Technologies, Inc. Conflict detection for queued command handling in disk drive controller
US6741243B2 (en) 2000-05-01 2004-05-25 Broadcom Corporation Method and system for reducing overflows in a computer graphics system
US6707462B1 (en) * 2000-05-12 2004-03-16 Microsoft Corporation Method and system for implementing graphics control constructs
US7116333B1 (en) * 2000-05-12 2006-10-03 Microsoft Corporation Data retrieval method and system
TW463120B (en) * 2000-05-16 2001-11-11 Silicon Integrated Sys Corp Method for enhancing 3D graphic performance by pre-sorting
US6996596B1 (en) 2000-05-23 2006-02-07 Mips Technologies, Inc. Floating-point processor with operating mode having improved accuracy and high performance
US6670958B1 (en) * 2000-05-26 2003-12-30 Ati International, Srl Method and apparatus for routing data to multiple graphics devices
US6724394B1 (en) * 2000-05-31 2004-04-20 Nvidia Corporation Programmable pixel shading architecture
US6670955B1 (en) * 2000-07-19 2003-12-30 Ati International Srl Method and system for sort independent alpha blending of graphic fragments
US6681224B2 (en) * 2000-07-31 2004-01-20 Fujitsu Limited Method and device for sorting data, and a computer product
US7414635B1 (en) * 2000-08-01 2008-08-19 Ati International Srl Optimized primitive filler
US6963347B1 (en) * 2000-08-04 2005-11-08 Ati International, Srl Vertex data processing with multiple threads of execution
US6714196B2 (en) * 2000-08-18 2004-03-30 Hewlett-Packard Development Company L.P Method and apparatus for tiled polygon traversal
US7002591B1 (en) * 2000-08-23 2006-02-21 Nintendo Co., Ltd. Method and apparatus for interleaved processing of direct and indirect texture coordinates in a graphics system
US6825851B1 (en) 2000-08-23 2004-11-30 Nintendo Co., Ltd. Method and apparatus for environment-mapped bump-mapping in a graphics system
US6980218B1 (en) * 2000-08-23 2005-12-27 Nintendo Co., Ltd. Method and apparatus for efficient generation of texture coordinate displacements for implementing emboss-style bump mapping in a graphics rendering system
US8692844B1 (en) 2000-09-28 2014-04-08 Nvidia Corporation Method and system for efficient antialiased rendering
US6828980B1 (en) * 2000-10-02 2004-12-07 Nvidia Corporation System, method and computer program product for z-texture mapping
US7027072B1 (en) * 2000-10-13 2006-04-11 Silicon Graphics, Inc. Method and system for spatially compositing digital video images with a tile pattern library
US7561155B1 (en) * 2000-10-23 2009-07-14 Evans & Sutherland Computer Corporation Method for reducing transport delay in an image generator
US7136069B1 (en) * 2000-10-31 2006-11-14 Sony Corporation Method and system for texturing
WO2002039389A1 (en) * 2000-11-07 2002-05-16 Holographic Imaging Llc Computer generated hologram display system
US20020080143A1 (en) * 2000-11-08 2002-06-27 Morgan David L. Rendering non-interactive three-dimensional content
WO2002069370A2 (en) 2000-11-12 2002-09-06 Bitboys, Inc. 3-d rendering engine with embedded memory
US7079133B2 (en) * 2000-11-16 2006-07-18 S3 Graphics Co., Ltd. Superscalar 3D graphics engine
US6680739B1 (en) * 2000-11-17 2004-01-20 Hewlett-Packard Development Company, L.P. Systems and methods for compositing graphical data
US6985162B1 (en) * 2000-11-17 2006-01-10 Hewlett-Packard Development Company, L.P. Systems and methods for rendering active stereo graphical data as passive stereo
US6882346B1 (en) * 2000-11-17 2005-04-19 Hewlett-Packard Development Company, L.P. System and method for efficiently rendering graphical data
US6697074B2 (en) * 2000-11-28 2004-02-24 Nintendo Co., Ltd. Graphics system interface
US7358974B2 (en) * 2001-01-29 2008-04-15 Silicon Graphics, Inc. Method and system for minimizing an amount of data needed to test data against subarea boundaries in spatially composited digital video
US7453459B2 (en) * 2001-02-26 2008-11-18 Adobe Systems Incorporated Composite rendering 3-D graphical objects
US6828975B2 (en) 2001-03-01 2004-12-07 Microsoft Corporation Method and system for managing graphics objects in a graphics display system
US7411593B2 (en) * 2001-03-28 2008-08-12 International Business Machines Corporation Image rotation with substantially no aliasing error
US6859209B2 (en) * 2001-05-18 2005-02-22 Sun Microsystems, Inc. Graphics data accumulation for improved multi-layer texture performance
TW512277B (en) * 2001-06-22 2002-12-01 Silicon Integrated Sys Corp Core logic of a computer system and control method of the same
GB2378108B (en) * 2001-07-24 2005-08-17 Imagination Tech Ltd Three dimensional graphics system
US20030030646A1 (en) * 2001-08-10 2003-02-13 Yeh Kwo-Woei Trilinear texture filtering method with proper texel selection
US6943800B2 (en) * 2001-08-13 2005-09-13 Ati Technologies, Inc. Method and apparatus for updating state data
US6744433B1 (en) * 2001-08-31 2004-06-01 Nvidia Corporation System and method for using and collecting information from a plurality of depth layers
US20030043148A1 (en) * 2001-09-06 2003-03-06 Lin-Tien Mei Method for accelerated triangle occlusion culling
US6924820B2 (en) * 2001-09-25 2005-08-02 Sun Microsystems, Inc. Over-evaluating samples during rasterization for improved datapath utilization
US6947053B2 (en) * 2001-09-27 2005-09-20 Intel Corporation Texture engine state variable synchronizer
JP3986497B2 (en) * 2001-10-10 2007-10-03 ソニー・コンピュータ・エンタテインメント・アメリカ・インク Point pushing system and method for drawing polygons in an environment with varying level of detail
AU2002335799A1 (en) 2001-10-10 2003-04-22 Sony Computer Entertainment America Inc. System and method for environment mapping
US7081893B2 (en) * 2001-10-10 2006-07-25 Sony Computer Entertainment America Inc. System and method for point pushing to render polygons in environments with changing levels of detail
US6999076B2 (en) * 2001-10-29 2006-02-14 Ati Technologies, Inc. System, method, and apparatus for early culling
US7081903B2 (en) * 2001-12-12 2006-07-25 Hewlett-Packard Development Company, L.P. Efficient movement of fragment stamp
US6747653B2 (en) * 2001-12-31 2004-06-08 Intel Corporation Efficient object storage for zone rendering
US6765588B2 (en) * 2002-01-08 2004-07-20 3Dlabs, Inc., Ltd. Multisample dithering with shuffle tables
KR100460970B1 (en) * 2002-01-10 2004-12-09 삼성전자주식회사 Data transmitting/receiving system and method thereof
US6812928B2 (en) * 2002-01-30 2004-11-02 Sun Microsystems, Inc. Performance texture mapping by combining requests for image data
US7310103B2 (en) * 2002-03-05 2007-12-18 Sun Microsystems, Inc. Pipelined 2D viewport clip circuit
US7154502B2 (en) * 2002-03-19 2006-12-26 3D Labs, Inc. Ltd. 3D graphics with optional memory write before texturing
US7027056B2 (en) * 2002-05-10 2006-04-11 Nec Electronics (Europe) Gmbh Graphics engine, and display driver IC and display module incorporating the graphics engine
WO2003096378A2 (en) * 2002-05-10 2003-11-20 Nec Electronics Corporation Display driver ic, display module and electrical device incorporating a graphics engine
US7447872B2 (en) * 2002-05-30 2008-11-04 Cisco Technology, Inc. Inter-chip processor control plane communication
US6980209B1 (en) * 2002-06-14 2005-12-27 Nvidia Corporation Method and system for scalable, dataflow-based, programmable processing of graphics data
US7024663B2 (en) * 2002-07-10 2006-04-04 Micron Technology, Inc. Method and system for generating object code to facilitate predictive memory retrieval
US6954204B2 (en) * 2002-07-18 2005-10-11 Nvidia Corporation Programmable graphics system and method using flexible, high-precision data formats
US6809732B2 (en) * 2002-07-18 2004-10-26 Nvidia Corporation Method and apparatus for generation of programmable shader configuration information from state-based control information and program instructions
US6864893B2 (en) * 2002-07-19 2005-03-08 Nvidia Corporation Method and apparatus for modifying depth values using pixel programs
KR100441079B1 (en) * 2002-07-31 2004-07-21 학교법인연세대학교 apparatus and method for antialiasing
US7321623B2 (en) * 2002-10-01 2008-01-22 Avocent Corporation Video compression system
US20060126718A1 (en) * 2002-10-01 2006-06-15 Avocent Corporation Video compression encoder
US9377987B2 (en) * 2002-10-22 2016-06-28 Broadcom Corporation Hardware assisted format change mechanism in a display controller
FR2846122B1 (en) * 2002-10-22 2005-04-15 Eric Piccuezzu METHOD AND DEVICE FOR CONSTRUCTING AND VISUALIZING THE IMAGE OF A COMPUTER MODEL
US20040095348A1 (en) * 2002-11-19 2004-05-20 Bleiweiss Avi I. Shading language interface and method
US7633506B1 (en) * 2002-11-27 2009-12-15 Ati Technologies Ulc Parallel pipeline graphics system
US7317456B1 (en) * 2002-12-02 2008-01-08 Ngrain (Canada) Corporation Method and apparatus for transforming point cloud data to volumetric data
US9446305B2 (en) * 2002-12-10 2016-09-20 Sony Interactive Entertainment America Llc System and method for improving the graphics performance of hosted applications
US8961316B2 (en) * 2002-12-10 2015-02-24 Ol2, Inc. System and method for improving the graphics performance of hosted applications
US9138644B2 (en) 2002-12-10 2015-09-22 Sony Computer Entertainment America Llc System and method for accelerated machine switching
US8851999B2 (en) * 2002-12-10 2014-10-07 Ol2, Inc. System and method for improving the graphics performance of hosted applications
US8840477B2 (en) 2002-12-10 2014-09-23 Ol2, Inc. System and method for improving the graphics performance of hosted applications
US8845434B2 (en) * 2002-12-10 2014-09-30 Ol2, Inc. System and method for improving the graphics performance of hosted applications
US7301537B2 (en) * 2002-12-20 2007-11-27 Telefonaktiebolaget Lm Ericsson (Publ) Graphics processing apparatus, methods and computer program products using minimum-depth occlusion culling and zig-zag traversal
CN100339869C (en) * 2002-12-20 2007-09-26 Lm爱立信电话有限公司 Graphics processing apparatus, methods and computer program products using minimum-depth occlusion culling and zig-zag traversal
US6996665B2 (en) * 2002-12-30 2006-02-07 International Business Machines Corporation Hazard queue for transaction pipeline
US7030884B2 (en) * 2003-02-13 2006-04-18 Hewlett-Packard Development Company, L.P. System and method for resampling texture maps
US7199806B2 (en) * 2003-03-19 2007-04-03 Sun Microsystems, Inc. Rasterization of primitives using parallel edge units
US7190367B2 (en) * 2003-03-25 2007-03-13 Mitsubishi Electric Research Laboratories, Inc. Method, apparatus, and system for rendering using a progressive cache
US7034837B2 (en) * 2003-05-05 2006-04-25 Silicon Graphics, Inc. Method, system, and computer program product for determining a structure of a graphics compositor tree
US7551183B2 (en) * 2003-06-30 2009-06-23 Intel Corporation Clipping and scissoring technique
US7280114B2 (en) * 2003-06-30 2007-10-09 Intel Corporation Line stipple pattern emulation through texture mapping
US7113192B2 (en) * 2003-06-30 2006-09-26 Intel Corporation Large 1D texture map representation with a 2D texture map
US20050017982A1 (en) * 2003-07-23 2005-01-27 Kane Francis James Dynamic imposter generation with MIP map anti-aliasing
US20050030309A1 (en) * 2003-07-25 2005-02-10 David Gettman Information display
US7467356B2 (en) * 2003-07-25 2008-12-16 Three-B International Limited Graphical user interface for 3d virtual display browser using virtual display windows
GB2404316B (en) * 2003-07-25 2005-11-30 Imagination Tech Ltd Three-Dimensional computer graphics system
US20050021472A1 (en) * 2003-07-25 2005-01-27 David Gettman Transactions in virtual property
GB2404546B (en) * 2003-07-25 2005-12-14 Purple Interactive Ltd A method of organising and displaying material content on a display to a viewer
US7002592B2 (en) 2003-07-30 2006-02-21 Hewlett-Packard Development Company, L.P. Graphical display system and method for applying parametric and non-parametric texture maps to graphical objects
US7009620B2 (en) * 2003-07-30 2006-03-07 Hewlett-Packard Development Company, L.P. System and method for combining parametric texture maps
US7006103B2 (en) * 2003-07-30 2006-02-28 Hewlett-Packard Development Company, L.P. System and method for editing parametric texture maps
US7623730B2 (en) * 2003-07-30 2009-11-24 Hewlett-Packard Development Company, L.P. System and method that compensate for rotations of textures defined by parametric texture maps
US9560371B2 (en) 2003-07-30 2017-01-31 Avocent Corporation Video compression system
US7032088B2 (en) * 2003-08-07 2006-04-18 Siemens Corporate Research, Inc. Advanced memory management architecture for large data volumes
GB0319697D0 (en) * 2003-08-21 2003-09-24 Falanx Microsystems As Method of and apparatus for differential encoding and decoding
US7218317B2 (en) * 2003-08-25 2007-05-15 Via Technologies, Inc. Mechanism for reducing Z buffer traffic in three-dimensional graphics processing
US7030887B2 (en) * 2003-09-12 2006-04-18 Microsoft Corporation Methods and systems for transparent depth sorting
US8788996B2 (en) 2003-09-15 2014-07-22 Nvidia Corporation System and method for configuring semiconductor functional circuits
US8775997B2 (en) * 2003-09-15 2014-07-08 Nvidia Corporation System and method for testing and configuring semiconductor functional circuits
US8732644B1 (en) 2003-09-15 2014-05-20 Nvidia Corporation Micro electro mechanical switch system and method for testing and configuring semiconductor functional circuits
JP4183082B2 (en) * 2003-09-26 2008-11-19 シャープ株式会社 3D image drawing apparatus and 3D image drawing method
KR100546383B1 (en) * 2003-09-29 2006-01-26 삼성전자주식회사 3D graphics rendering engine for processing an invisible fragment and method thereof
US7239322B2 (en) 2003-09-29 2007-07-03 Ati Technologies Inc Multi-thread graphic processing system
US8133115B2 (en) 2003-10-22 2012-03-13 Sony Computer Entertainment America Llc System and method for recording and displaying a graphical path in a video game
US8174531B1 (en) 2003-10-29 2012-05-08 Nvidia Corporation Programmable graphics processor for multithreaded execution of programs
US7139003B1 (en) * 2003-12-15 2006-11-21 Nvidia Corporation Methods of processing graphics data including reading and writing buffers
US8860737B2 (en) * 2003-10-29 2014-10-14 Nvidia Corporation Programmable graphics processor for multithreaded execution of programs
US7202872B2 (en) * 2003-10-29 2007-04-10 Via Technologies, Inc. Apparatus for compressing data in a bit stream or bit pattern
US7836276B2 (en) 2005-12-02 2010-11-16 Nvidia Corporation System and method for processing thread groups in a SIMD architecture
US7245302B1 (en) * 2003-10-30 2007-07-17 Nvidia Corporation Processing high numbers of independent textures in a 3-D graphics pipeline
US8823718B2 (en) * 2003-11-14 2014-09-02 Microsoft Corporation Systems and methods for downloading algorithmic elements to a coprocessor and corresponding techniques
US6900818B1 (en) * 2003-11-18 2005-05-31 Silicon Graphics, Inc. Primitive culling apparatus and method
US7158132B1 (en) * 2003-11-18 2007-01-02 Silicon Graphics, Inc. Method and apparatus for processing primitive data for potential display on a display device
US20090027383A1 (en) * 2003-11-19 2009-01-29 Lucid Information Technology, Ltd. Computing system parallelizing the operation of multiple graphics processing pipelines (GPPLs) and supporting depth-less based image recomposition
US6897871B1 (en) 2003-11-20 2005-05-24 Ati Technologies Inc. Graphics processing architecture employing a unified shader
US20050122338A1 (en) * 2003-12-05 2005-06-09 Michael Hong Apparatus and method for rendering graphics primitives using a multi-pass rendering approach
EP1542167A1 (en) * 2003-12-09 2005-06-15 Koninklijke Philips Electronics N.V. Computer graphics processor and method for rendering 3D scenes on a 3D image display screen
US7248261B1 (en) * 2003-12-15 2007-07-24 Nvidia Corporation Method and apparatus to accelerate rendering of shadow effects for computer-generated images
US7053904B1 (en) * 2003-12-15 2006-05-30 Nvidia Corporation Position conflict detection and avoidance in a programmable graphics processor
US8711161B1 (en) 2003-12-18 2014-04-29 Nvidia Corporation Functional component compensation reconfiguration system and method
JP4064339B2 (en) * 2003-12-19 2008-03-19 株式会社東芝 Drawing processing apparatus, drawing processing method, and drawing processing program
US7450120B1 (en) * 2003-12-19 2008-11-11 Nvidia Corporation Apparatus, system, and method for Z-culling
US20050134588A1 (en) * 2003-12-22 2005-06-23 Hybrid Graphics, Ltd. Method and apparatus for image processing
US8269769B1 (en) 2003-12-22 2012-09-18 Nvidia Corporation Occlusion prediction compression system and method
US8390619B1 (en) 2003-12-22 2013-03-05 Nvidia Corporation Occlusion prediction graphics processing system and method
US7995056B1 (en) * 2003-12-22 2011-08-09 Nvidia Corporation Culling data selection system and method
US8854364B1 (en) 2003-12-22 2014-10-07 Nvidia Corporation Tight depth range occlusion prediction system and method
US9098943B1 (en) * 2003-12-31 2015-08-04 Ziilabs Inc., Ltd. Multiple simultaneous bin sizes
US6975325B2 (en) * 2004-01-23 2005-12-13 Ati Technologies Inc. Method and apparatus for graphics processing using state and shader management
US7656417B2 (en) * 2004-02-12 2010-02-02 Ati Technologies Ulc Appearance determination using fragment reduction
US20050195186A1 (en) * 2004-03-02 2005-09-08 Ati Technologies Inc. Method and apparatus for object based visibility culling
US20050206648A1 (en) * 2004-03-16 2005-09-22 Perry Ronald N Pipeline and cache for processing data progressively
US7030878B2 (en) * 2004-03-19 2006-04-18 Via Technologies, Inc. Method and apparatus for generating a shadow effect using shadow volumes
US8860722B2 (en) * 2004-05-14 2014-10-14 Nvidia Corporation Early Z scoreboard tracking system and method
US8687010B1 (en) 2004-05-14 2014-04-01 Nvidia Corporation Arbitrary size texture palettes for use in graphics systems
US8411105B1 (en) 2004-05-14 2013-04-02 Nvidia Corporation Method and system for computing pixel parameters
US8711155B2 (en) * 2004-05-14 2014-04-29 Nvidia Corporation Early kill removal graphics processing system and method
US8736620B2 (en) 2004-05-14 2014-05-27 Nvidia Corporation Kill bit graphics processing system and method
US7079156B1 (en) * 2004-05-14 2006-07-18 Nvidia Corporation Method and system for implementing multiple high precision and low precision interpolators for a graphics pipeline
US8736628B1 (en) 2004-05-14 2014-05-27 Nvidia Corporation Single thread graphics processing system and method
US8743142B1 (en) 2004-05-14 2014-06-03 Nvidia Corporation Unified data fetch graphics processing system and method
US20060007234A1 (en) * 2004-05-14 2006-01-12 Hutchins Edward A Coincident graphics pixel scoreboard tracking system and method
US8416242B1 (en) 2004-05-14 2013-04-09 Nvidia Corporation Method and system for interpolating level-of-detail in graphics processors
US8427490B1 (en) * 2004-05-14 2013-04-23 Nvidia Corporation Validating a graphics pipeline using pre-determined schedules
US8432394B1 (en) 2004-05-14 2013-04-30 Nvidia Corporation Method and system for implementing clamped z value interpolation in a raster stage of a graphics pipeline
JP2008502064A (en) * 2004-06-08 2008-01-24 スリー−ビィ・インターナショナル・リミテッド Display image texture
JP4199159B2 (en) * 2004-06-09 2008-12-17 株式会社東芝 Drawing processing apparatus, drawing processing method, and drawing processing program
US7457461B2 (en) 2004-06-25 2008-11-25 Avocent Corporation Video compression noise immunity
US7505036B1 (en) * 2004-07-30 2009-03-17 3Dlabs Inc. Ltd. Order-independent 3D graphics binning architecture
US7277098B2 (en) * 2004-08-23 2007-10-02 Via Technologies, Inc. Apparatus and method of an improved stencil shadow volume operation
WO2006028093A1 (en) * 2004-09-06 2006-03-16 Matsushita Electric Industrial Co., Ltd. Video generation device and video generation method
US7545997B2 (en) * 2004-09-10 2009-06-09 Xerox Corporation Simulated high resolution using binary sub-sampling
US8723231B1 (en) 2004-09-15 2014-05-13 Nvidia Corporation Semiconductor die micro electro-mechanical switch management system and method
US7205997B1 (en) * 2004-09-28 2007-04-17 Nvidia Corporation Transparent video capture from primary video surface
US8624906B2 (en) * 2004-09-29 2014-01-07 Nvidia Corporation Method and system for non stalling pipeline instruction fetching from memory
US7233334B1 (en) * 2004-09-29 2007-06-19 Nvidia Corporation Storage buffers with reference counters to improve utilization
US8711156B1 (en) 2004-09-30 2014-04-29 Nvidia Corporation Method and system for remapping processing elements in a pipeline of a graphics processing unit
US20060071933A1 (en) 2004-10-06 2006-04-06 Sony Computer Entertainment Inc. Application binary interface for multi-pass shaders
US8424012B1 (en) 2004-11-15 2013-04-16 Nvidia Corporation Context switching on a video processor having a scalar execution unit and a vector execution unit
JP4692956B2 (en) * 2004-11-22 2011-06-01 株式会社ソニー・コンピュータエンタテインメント Drawing processing apparatus and drawing processing method
US20060187229A1 (en) * 2004-12-08 2006-08-24 Xgi Technology Inc. (Cayman) Page based rendering in 3D graphics system
US7623132B1 (en) * 2004-12-20 2009-11-24 Nvidia Corporation Programmable shader having register forwarding for reduced register-file bandwidth consumption
NO20045586L (en) * 2004-12-21 2006-06-22 Sinvent As Device and method for determining cutting lines
CN101849227A (en) 2005-01-25 2010-09-29 透明信息技术有限公司 Graphics processing and display system employing multiple graphics cores on a silicon chip of monolithic construction
US7312801B2 (en) * 2005-02-25 2007-12-25 Microsoft Corporation Hardware accelerated blend modes
US7242169B2 (en) * 2005-03-01 2007-07-10 Apple Inc. Method and apparatus for voltage compensation for parasitic impedance
US8089486B2 (en) * 2005-03-21 2012-01-03 Qualcomm Incorporated Tiled prefetched and cached depth buffer
US9363481B2 (en) * 2005-04-22 2016-06-07 Microsoft Technology Licensing, Llc Protected media pipeline
US7349066B2 (en) * 2005-05-05 2008-03-25 Asml Masktools B.V. Apparatus, method and computer program product for performing a model based optical proximity correction factoring neighbor influence
US20060257827A1 (en) * 2005-05-12 2006-11-16 Blinktwice, Llc Method and apparatus to individualize content in an augmentative and alternative communication device
US8427496B1 (en) 2005-05-13 2013-04-23 Nvidia Corporation Method and system for implementing compression across a graphics bus interconnect
US7478289B1 (en) * 2005-06-03 2009-01-13 Nvidia Corporation System and method for improving the yield of integrated circuits containing memory
US7636126B2 (en) 2005-06-22 2009-12-22 Sony Computer Entertainment Inc. Delay matching in audio/video systems
US9298311B2 (en) 2005-06-23 2016-03-29 Apple Inc. Trackpad sensitivity compensation
KR100913173B1 (en) * 2005-07-05 2009-08-19 Samsung Mobile Display Co., Ltd. 3 dimension graphic processor and autostereoscopic display device using the same
KR100932977B1 (en) * 2005-07-05 2009-12-21 Samsung Mobile Display Co., Ltd. Stereoscopic video display
US20070019740A1 (en) * 2005-07-25 2007-01-25 Texas Instruments Incorporated Video coding for 3d rendering
US8279221B2 (en) * 2005-08-05 2012-10-02 Samsung Display Co., Ltd. 3D graphics processor and autostereoscopic display device using the same
US7616202B1 (en) * 2005-08-12 2009-11-10 Nvidia Corporation Compaction of z-only samples
US20070055879A1 (en) * 2005-08-16 2007-03-08 Jianjun Luo System and method for high performance public key encryption
US7492373B2 (en) * 2005-08-22 2009-02-17 Intel Corporation Reducing memory bandwidth to texture samplers via re-interpolation of texture coordinates
US7551177B2 (en) * 2005-08-31 2009-06-23 Ati Technologies, Inc. Methods and apparatus for retrieving and combining samples of graphics information
US7782334B1 (en) * 2005-09-13 2010-08-24 Nvidia Corporation Pixel shader-based data array resizing
US7433191B2 (en) * 2005-09-30 2008-10-07 Apple Inc. Thermal contact arrangement
US8144149B2 (en) * 2005-10-14 2012-03-27 Via Technologies, Inc. System and method for dynamically load balancing multiple shader stages in a shared pool of processing units
US9092170B1 (en) 2005-10-18 2015-07-28 Nvidia Corporation Method and system for implementing fragment operation processing across a graphics bus interconnect
US7432934B2 (en) * 2005-10-19 2008-10-07 Hewlett-Packard Development Company, L.P. System and method for display sharing
GB0524804D0 (en) 2005-12-05 2006-01-11 Falanx Microsystems As Method of and apparatus for processing graphics
GB0523084D0 (en) * 2005-11-11 2005-12-21 Cancer Res Inst Royal Imaging method and apparatus
US7598711B2 (en) * 2005-11-23 2009-10-06 Apple Inc. Power source switchover apparatus and method
US7623127B2 (en) * 2005-11-29 2009-11-24 Siemens Medical Solutions Usa, Inc. Method and apparatus for discrete mesh filleting and rounding through ball pivoting
WO2007064280A1 (en) * 2005-12-01 2007-06-07 Swiftfoot Graphics Ab Computer graphics processor and method for rendering a three-dimensional image on a display screen
US7916146B1 (en) * 2005-12-02 2011-03-29 Nvidia Corporation Halt context switching method and system
US7616218B1 (en) 2005-12-05 2009-11-10 Nvidia Corporation Apparatus, system, and method for clipping graphics primitives
US7292254B1 (en) 2005-12-05 2007-11-06 Nvidia Corporation Apparatus, system, and method for clipping graphics primitives with reduced sensitivity to vertex ordering
US7439988B1 (en) 2005-12-05 2008-10-21 Nvidia Corporation Apparatus, system, and method for clipping graphics primitives with respect to a clipping plane
US20080273031A1 (en) * 2005-12-08 2008-11-06 Xgi Technology Inc. (Cayman) Page based rendering in 3D graphics system
US7434032B1 (en) 2005-12-13 2008-10-07 Nvidia Corporation Tracking register usage during multithreaded processing using a scoreboard having separate memory regions and storing sequential register size indicators
US8698811B1 (en) 2005-12-15 2014-04-15 Nvidia Corporation Nested boustrophedonic patterns for rasterization
US7420572B1 (en) 2005-12-19 2008-09-02 Nvidia Corporation Apparatus, system, and method for clipping graphics primitives with accelerated context switching
US9117309B1 (en) 2005-12-19 2015-08-25 Nvidia Corporation Method and system for rendering polygons with a bounding box in a graphics processor unit
US8390645B1 (en) 2005-12-19 2013-03-05 Nvidia Corporation Method and system for rendering connecting antialiased line segments
US7714877B1 (en) 2005-12-19 2010-05-11 Nvidia Corporation Apparatus, system, and method for determining clipping distances
US8817035B2 (en) * 2005-12-21 2014-08-26 Nvidia Corporation Texture pipeline context switch
US7564456B1 (en) * 2006-01-13 2009-07-21 Nvidia Corporation Apparatus and method for raster tile coalescing
US8718147B2 (en) * 2006-02-17 2014-05-06 Avocent Huntsville Corporation Video compression algorithm
US7555570B2 (en) 2006-02-17 2009-06-30 Avocent Huntsville Corporation Device and method for configuring a target device
US8125486B2 (en) * 2006-02-23 2012-02-28 Los Alamos National Security, Llc Combining multi-layered bitmap files using network specific hardware
US8006236B1 (en) * 2006-02-24 2011-08-23 Nvidia Corporation System and method for compiling high-level primitive programs into primitive program micro-code
US7825933B1 (en) 2006-02-24 2010-11-02 Nvidia Corporation Managing primitive program vertex attributes as per-attribute arrays
US8171461B1 (en) 2006-02-24 2012-05-01 Nvidia Corporation Primitive program compilation for flat attributes with provoking vertex independence
TWI319166B (en) * 2006-03-06 2010-01-01 Via Tech Inc Method and related apparatus for graphic processing
KR20070092499A (en) * 2006-03-10 2007-09-13 Samsung Electronics Co., Ltd. Method and apparatus for processing 3 dimensional data
CA2707680A1 (en) 2006-03-14 2007-09-20 Transgaming Inc. General purpose software parallel task engine
US7782961B2 (en) * 2006-04-28 2010-08-24 Avocent Corporation DVC delta commands
US7941724B2 (en) * 2006-05-01 2011-05-10 Nokia Siemens Networks Oy Embedded retransmission scheme with cross-packet coding
US7778978B2 (en) * 2006-05-01 2010-08-17 Nokia Siemens Networks Oy Decoder for a system with H-ARQ with cross-packet coding
US7880746B2 (en) 2006-05-04 2011-02-01 Sony Computer Entertainment Inc. Bandwidth management through lighting control of a user environment via a display device
US7965859B2 (en) 2006-05-04 2011-06-21 Sony Computer Entertainment Inc. Lighting control of a user environment via a display device
US7353691B2 (en) * 2006-06-02 2008-04-08 General Electric Company High performance generator stator leak monitoring system
US7944443B1 (en) * 2006-06-09 2011-05-17 Pixar Sliding patch deformer
CN101145239A (en) * 2006-06-20 2008-03-19 威盛电子股份有限公司 Graphics processing unit and method for border color handling
US7965296B2 (en) * 2006-06-20 2011-06-21 Via Technologies, Inc. Systems and methods for storing texture map data
US7898551B2 (en) * 2006-06-20 2011-03-01 Via Technologies, Inc. Systems and methods for performing a bank swizzle operation to reduce bank collisions
US7880745B2 (en) * 2006-06-20 2011-02-01 Via Technologies, Inc. Systems and methods for border color handling in a graphics processing unit
US8928676B2 (en) * 2006-06-23 2015-01-06 Nvidia Corporation Method for parallel fine rasterization in a raster stage of a graphics pipeline
US7652672B2 (en) * 2006-06-29 2010-01-26 Mediatek, Inc. Systems and methods for texture management
US8284204B2 (en) * 2006-06-30 2012-10-09 Nokia Corporation Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering
KR100762811B1 (en) * 2006-07-20 2007-10-02 Samsung Electronics Co., Ltd. Method and system for tile binning using half-plane edge function
US8633927B2 (en) 2006-07-25 2014-01-21 Nvidia Corporation Re-render acceleration of frame with lighting change
US7952588B2 (en) * 2006-08-03 2011-05-31 Qualcomm Incorporated Graphics processing unit with extended vertex cache
US8009172B2 (en) * 2006-08-03 2011-08-30 Qualcomm Incorporated Graphics processing unit with shared arithmetic logic unit
US8237739B2 (en) 2006-09-12 2012-08-07 Qualcomm Incorporated Method and device for performing user-defined clipping in object space
KR101257849B1 (en) * 2006-09-29 2013-04-30 Samsung Electronics Co., Ltd. Method and Apparatus for rendering 3D graphic objects, and Method and Apparatus to minimize rendering objects for the same
GB2442266B (en) * 2006-09-29 2008-10-22 Imagination Tech Ltd Improvements in memory management for systems for generating 3-dimensional computer images
US7605825B1 (en) * 2006-10-24 2009-10-20 Adobe Systems, Incorporated Fast zoom-adaptable anti-aliasing of lines using a graphics processing unit
US8537168B1 (en) 2006-11-02 2013-09-17 Nvidia Corporation Method and system for deferred coverage mask generation in a raster stage
US8427487B1 (en) 2006-11-02 2013-04-23 Nvidia Corporation Multiple tile output using interface compression in a raster stage
US8482567B1 (en) 2006-11-03 2013-07-09 Nvidia Corporation Line rasterization techniques
US7746352B2 (en) * 2006-11-03 2010-06-29 Nvidia Corporation Deferred page faulting in virtual memory based sparse texture representations
KR100803220B1 (en) * 2006-11-20 2008-02-14 Samsung Electronics Co., Ltd. Method and apparatus for rendering of 3d graphics of multi-pipeline
KR100818286B1 (en) * 2006-11-23 2008-04-01 Samsung Electronics Co., Ltd. Method and apparatus for rendering 3 dimensional graphics data considering fog effect
US9965886B2 (en) * 2006-12-04 2018-05-08 Arm Norway As Method of and apparatus for processing graphics
US8212835B1 (en) * 2006-12-14 2012-07-03 Nvidia Corporation Systems and methods for smooth transitions to bi-cubic magnification
US8547395B1 (en) 2006-12-20 2013-10-01 Nvidia Corporation Writing coverage information to a framebuffer in a computer graphics system
KR100848687B1 (en) * 2007-01-05 2008-07-28 Samsung Electronics Co., Ltd. 3-dimension graphic processing apparatus and operating method thereof
US7940261B2 (en) * 2007-01-10 2011-05-10 Qualcomm Incorporated Automatic load balancing of a 3D graphics pipeline
US7791605B2 (en) * 2007-05-01 2010-09-07 Qualcomm Incorporated Universal rasterization of graphic primitives
US7733354B1 (en) * 2007-05-31 2010-06-08 Adobe Systems Incorporated Anti-aliased rendering
US7948500B2 (en) * 2007-06-07 2011-05-24 Nvidia Corporation Extrapolation of nonresident mipmap data using resident mipmap data
US7944453B1 (en) * 2007-06-07 2011-05-17 Nvidia Corporation Extrapolation texture filtering for nonresident mipmaps
FR2917211A1 (en) * 2007-06-08 2008-12-12 St Microelectronics Sa METHOD AND DEVICE FOR GENERATING GRAPHICS
KR101387366B1 (en) * 2007-06-27 2014-04-21 Samsung Electronics Co., Ltd. Multiview autostereoscopic display device and multiview autostereoscopic display method
US8683126B2 (en) 2007-07-30 2014-03-25 Nvidia Corporation Optimal use of buffer space by a storage controller which writes retrieved data directly to a memory
US8441497B1 (en) 2007-08-07 2013-05-14 Nvidia Corporation Interpolation of vertex attributes in a graphics processor
US8004522B1 (en) * 2007-08-07 2011-08-23 Nvidia Corporation Using coverage information in computer graphics
US8659601B1 (en) 2007-08-15 2014-02-25 Nvidia Corporation Program sequencer for generating indeterminant length shader programs for a graphics processor
US8411096B1 (en) 2007-08-15 2013-04-02 Nvidia Corporation Shader program instruction fetch
US8698819B1 (en) 2007-08-15 2014-04-15 Nvidia Corporation Software assisted shader merging
US8564598B2 (en) * 2007-08-15 2013-10-22 Nvidia Corporation Parallelogram unified primitive description for rasterization
US8325203B1 (en) 2007-08-15 2012-12-04 Nvidia Corporation Optimal caching for virtual coverage antialiasing
US9183607B1 (en) 2007-08-15 2015-11-10 Nvidia Corporation Scoreboard cache coherence in a graphics pipeline
US9024957B1 (en) 2007-08-15 2015-05-05 Nvidia Corporation Address independent shader program loading
US8201102B2 (en) * 2007-09-04 2012-06-12 Apple Inc. Opaque views for graphical user interfaces
US8996846B2 (en) 2007-09-27 2015-03-31 Nvidia Corporation System, method and computer program product for performing a scan operation
US8289319B2 (en) * 2007-10-08 2012-10-16 Ati Technologies Ulc Apparatus and method for processing pixel depth information
JP2009099098A (en) * 2007-10-19 2009-05-07 Toshiba Corp Computer graphics drawing device and drawing method
US8724483B2 (en) 2007-10-22 2014-05-13 Nvidia Corporation Loopback configuration for bi-directional interfaces
US8638341B2 (en) * 2007-10-23 2014-01-28 Qualcomm Incorporated Antialiasing of two-dimensional vector images
US8264484B1 (en) 2007-10-29 2012-09-11 Nvidia Corporation System, method, and computer program product for organizing a plurality of rays utilizing a bounding volume
US8284188B1 (en) 2007-10-29 2012-10-09 Nvidia Corporation Ray tracing system, method, and computer program product for simultaneously traversing a hierarchy of rays and a hierarchy of objects
US8065288B1 (en) 2007-11-09 2011-11-22 Nvidia Corporation System, method, and computer program product for testing a query against multiple sets of objects utilizing a single instruction multiple data (SIMD) processing architecture
US8661226B2 (en) * 2007-11-15 2014-02-25 Nvidia Corporation System, method, and computer program product for performing a scan operation on a sequence of single-bit values using a parallel processor architecture
US8243083B1 (en) 2007-12-04 2012-08-14 Nvidia Corporation System, method, and computer program product for converting a scan algorithm to a segmented scan algorithm in an operator-independent manner
US8773422B1 (en) 2007-12-04 2014-07-08 Nvidia Corporation System, method, and computer program product for grouping linearly ordered primitives
US8878849B2 (en) * 2007-12-14 2014-11-04 Nvidia Corporation Horizon split ambient occlusion
US8780123B2 (en) * 2007-12-17 2014-07-15 Nvidia Corporation Interrupt handling techniques in the rasterizer of a GPU
US9064333B2 (en) 2007-12-17 2015-06-23 Nvidia Corporation Interrupt handling techniques in the rasterizer of a GPU
US8933946B2 (en) * 2007-12-31 2015-01-13 Intel Corporation Mechanism for effectively handling texture sampling
WO2009094036A1 (en) * 2008-01-25 2009-07-30 Hewlett-Packard Development Company, L.P. Coding mode selection for block-based encoding
US8358314B2 (en) * 2008-02-08 2013-01-22 Apple Inc. Method for reducing framebuffer memory accesses
US8134551B2 (en) * 2008-02-29 2012-03-13 Autodesk, Inc. Frontend for universal rendering framework
US9471996B2 (en) * 2008-02-29 2016-10-18 Autodesk, Inc. Method for creating graphical materials for universal rendering framework
US8068120B2 (en) * 2008-03-07 2011-11-29 Via Technologies, Inc. Guard band clipping systems and methods
US8302078B2 (en) * 2008-03-10 2012-10-30 The Boeing Company Lazy evaluation of geometric definitions of objects within procedural programming environments
GB2458488C (en) * 2008-03-19 2018-09-12 Imagination Tech Ltd Untransformed display lists in a tile based rendering system
US7984317B2 (en) * 2008-03-24 2011-07-19 Apple Inc. Hardware-based power management of functional blocks
US8212806B2 (en) * 2008-04-08 2012-07-03 Autodesk, Inc. File format extensibility for universal rendering framework
US8923385B2 (en) 2008-05-01 2014-12-30 Nvidia Corporation Rewind-enabled hardware encoder
US8681861B2 (en) 2008-05-01 2014-03-25 Nvidia Corporation Multistandard hardware video encoder
US8650364B2 (en) * 2008-05-28 2014-02-11 Vixs Systems, Inc. Processing system with linked-list based prefetch buffer and methods for use therewith
US8502832B2 (en) * 2008-05-30 2013-08-06 Advanced Micro Devices, Inc. Floating point texture filtering using unsigned linear interpolators and block normalizations
US9093040B2 (en) * 2008-05-30 2015-07-28 Advanced Micro Devices, Inc. Redundancy method and apparatus for shader column repair
KR101427408B1 (en) * 2008-05-30 2014-08-07 어드밴스드 마이크로 디바이시즈, 인코포레이티드 Scalable and unified compute system
US20110040771A1 (en) * 2008-06-18 2011-02-17 Petascan Ltd. Distributed hardware-based data querying
WO2010002070A1 (en) * 2008-06-30 2010-01-07 Korea Institute Of Oriental Medicine Method for grouping 3d models to classify constitution
US8667404B2 (en) * 2008-08-06 2014-03-04 Autodesk, Inc. Predictive material editor
JP5658430B2 (en) * 2008-08-15 2015-01-28 Panasonic IP Management Co., Ltd. Image processing device
US9569875B1 (en) * 2008-08-21 2017-02-14 Pixar Ordered list management
US20100053205A1 (en) * 2008-09-03 2010-03-04 Debra Brandwein Method, apparatus, and system for displaying graphics using html elements
US8310494B2 (en) * 2008-09-30 2012-11-13 Apple Inc. Method for reducing graphics rendering failures
US8228337B1 (en) 2008-10-03 2012-07-24 Nvidia Corporation System and method for temporal load balancing across GPUs
US8427474B1 (en) * 2008-10-03 2013-04-23 Nvidia Corporation System and method for temporal load balancing across GPUs
US9336624B2 (en) * 2008-10-07 2016-05-10 Mitsubishi Electric Research Laboratories, Inc. Method and system for rendering 3D distance fields
CA2740139C (en) 2008-10-10 2014-05-13 Lg Electronics Inc. Reception system and data processing method
US8560957B2 (en) * 2008-10-13 2013-10-15 Autodesk, Inc. Data-driven interface for managing materials
US8601398B2 (en) * 2008-10-13 2013-12-03 Autodesk, Inc. Data-driven interface for managing materials
US9342901B2 (en) 2008-10-27 2016-05-17 Autodesk, Inc. Material data processing pipeline
US8584084B2 (en) * 2008-11-12 2013-11-12 Autodesk, Inc. System for library content creation
US8291218B2 (en) * 2008-12-02 2012-10-16 International Business Machines Corporation Creating and using secure communications channels for virtual universes
US8489851B2 (en) 2008-12-11 2013-07-16 Nvidia Corporation Processing of read requests in a memory controller using pre-fetch mechanism
US8321492B1 (en) 2008-12-11 2012-11-27 Nvidia Corporation System, method, and computer program product for converting a reduction algorithm to a segmented reduction algorithm
US8325182B2 (en) * 2008-12-31 2012-12-04 Intel Corporation Methods and systems to selectively batch-cull graphics primitives in response to sample cull results
GB0900700D0 (en) * 2009-01-15 2009-03-04 Advanced Risc Mach Ltd Methods of and apparatus for processing graphics
KR101623020B1 (en) * 2009-02-01 2016-05-20 LG Electronics Inc. Broadcast receiver and 3d video data processing method
US9256514B2 (en) 2009-02-19 2016-02-09 Nvidia Corporation Debugging and performance analysis of applications
US8095560B2 (en) * 2009-02-26 2012-01-10 Yahoo! Inc. Edge attribute aggregation in a directed graph
US9375635B2 (en) * 2009-03-23 2016-06-28 Sony Interactive Entertainment America Llc System and method for improving the graphics performance of hosted applications
US10525344B2 (en) * 2009-03-23 2020-01-07 Sony Interactive Entertainment America Llc System and method for improving the graphics performance of hosted applications
KR20100108697A (en) * 2009-03-30 2010-10-08 Samsung Electronics Co., Ltd. Semiconductor memory device having swap function for dq pads
US20110032259A1 (en) * 2009-06-09 2011-02-10 Intromedic Co., Ltd. Method of displaying images obtained from an in-vivo imaging device and apparatus using same
US9082216B2 (en) * 2009-07-01 2015-07-14 Disney Enterprises, Inc. System and method for filter kernel interpolation for seamless mipmap filtering
US8564616B1 (en) 2009-07-17 2013-10-22 Nvidia Corporation Cull before vertex attribute fetch and vertex lighting
US8542247B1 (en) 2009-07-17 2013-09-24 Nvidia Corporation Cull before vertex attribute fetch and vertex lighting
US20110025700A1 (en) * 2009-07-30 2011-02-03 Lee Victor W Using a Texture Unit for General Purpose Computing
US20110043518A1 (en) * 2009-08-21 2011-02-24 Nicolas Galoppo Von Borries Techniques to store and retrieve image data
US9300969B2 (en) 2009-09-09 2016-03-29 Apple Inc. Video storage
US20110063305A1 (en) * 2009-09-16 2011-03-17 Nvidia Corporation Co-processing techniques on heterogeneous graphics processing units
US9058672B2 (en) * 2009-10-06 2015-06-16 Nvidia Corporation Using a pixel offset for evaluating a plane equation
JP5590849B2 (en) * 2009-10-08 2014-09-17 Canon Inc. Data processing apparatus including parallel processing circuit having a plurality of processing modules, its control apparatus, its control method, and program
US8976195B1 (en) 2009-10-14 2015-03-10 Nvidia Corporation Generating clip state for a batch of vertices
US8384736B1 (en) * 2009-10-14 2013-02-26 Nvidia Corporation Generating clip state for a batch of vertices
CN102640457B (en) 2009-11-04 2015-01-21 新泽西理工学院 Differential frame based scheduling for input queued switches
JP2011128713A (en) * 2009-12-15 2011-06-30 Toshiba Corp Apparatus and program for processing image
US9530189B2 (en) 2009-12-31 2016-12-27 Nvidia Corporation Alternate reduction ratios and threshold mechanisms for framebuffer compression
US8963797B2 (en) * 2010-01-06 2015-02-24 Apple Inc. Display driving architectures
US9378612B2 (en) * 2010-01-08 2016-06-28 Bally Gaming, Inc. Morphing geometric structures of wagering game objects
US9331869B2 (en) 2010-03-04 2016-05-03 Nvidia Corporation Input/output request packet handling techniques by a device specific kernel mode driver
US8970608B2 (en) * 2010-04-05 2015-03-03 Nvidia Corporation State objects for specifying dynamic state
US8773448B2 (en) * 2010-04-09 2014-07-08 Intel Corporation List texture
JP5143856B2 (en) * 2010-04-16 2013-02-13 Sony Computer Entertainment Inc. 3D image display device and 3D image display method
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
US8593466B2 (en) * 2010-06-08 2013-11-26 Intel Corporation Tile rendering for image processing
WO2011161723A1 (en) * 2010-06-24 2011-12-29 Fujitsu Limited Drawing device and drawing method
US9053562B1 (en) 2010-06-24 2015-06-09 Gregory S. Rabin Two dimensional to three dimensional moving image converter
IT1401731B1 (en) * 2010-06-28 2013-08-02 Sisvel Technology Srl METHOD FOR 2D-COMPATIBLE DECODING OF STEREOSCOPIC VIDEO FLOWS
JP5735227B2 (en) * 2010-07-16 2015-06-17 Renesas Electronics Corporation Image conversion apparatus and image conversion system
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
KR101719485B1 (en) * 2010-09-20 2017-03-27 Samsung Electronics Co., Ltd. Apparatus and method for early fragment discarding in graphic processing unit
KR101682650B1 (en) * 2010-09-24 2016-12-21 Samsung Electronics Co., Ltd. Apparatus and method for back-face culling using frame coherence
US8593475B2 (en) 2010-10-13 2013-11-26 Qualcomm Incorporated Systems and methods for dynamic procedural texture generation management
US9171350B2 (en) 2010-10-28 2015-10-27 Nvidia Corporation Adaptive resolution DGPU rendering to provide constant framerate with free IGPU scale up
US9971551B2 (en) * 2010-11-01 2018-05-15 Electronics For Imaging, Inc. Previsualization for large format print jobs
JP5274717B2 (en) * 2010-11-18 2013-08-28 Mitsubishi Electric Corporation 3D image display apparatus and 3D image display program
US8405668B2 (en) * 2010-11-19 2013-03-26 Apple Inc. Streaming translation in display pipe
US8503753B2 (en) * 2010-12-02 2013-08-06 Kabushiki Kaisha Toshiba System and method for triangular interpolation in image reconstruction for PET
US9535560B1 (en) 2010-12-10 2017-01-03 Wyse Technology L.L.C. Methods and systems for facilitating a remote desktop session for a web browser and a remote desktop server
US9245047B2 (en) 2010-12-10 2016-01-26 Wyse Technology L.L.C. Methods and systems for facilitating a remote desktop session utilizing a remote desktop client common interface
US8949726B2 (en) 2010-12-10 2015-02-03 Wyse Technology L.L.C. Methods and systems for conducting a remote desktop session via HTML that supports a 2D canvas and dynamic drawing
US9430036B1 (en) * 2010-12-10 2016-08-30 Wyse Technology L.L.C. Methods and systems for facilitating accessing and controlling a remote desktop of a remote machine in real time by a windows web browser utilizing HTTP
US9244912B1 (en) 2010-12-10 2016-01-26 Wyse Technology L.L.C. Methods and systems for facilitating a remote desktop redrawing session utilizing HTML
US9395885B1 (en) 2010-12-10 2016-07-19 Wyse Technology L.L.C. Methods and systems for a remote desktop session utilizing HTTP header
US20120159292A1 (en) * 2010-12-16 2012-06-21 Oce-Technologies B.V. Method of processing an object-based image file with content type dependent image processing algorithms
KR101424411B1 (en) 2010-12-21 2014-07-28 Empire Technology Development LLC Dummy information for location privacy in location based services
US8549399B2 (en) * 2011-01-18 2013-10-01 Apple Inc. Identifying a selection of content in a structured document
KR101773396B1 (en) * 2011-02-09 2017-08-31 Samsung Electronics Co., Ltd. Graphic Processing Apparatus and Method for Decompressing to Data
US8786619B2 (en) 2011-02-25 2014-07-22 Adobe Systems Incorporated Parallelized definition and display of content in a scripting environment
US9269181B2 (en) * 2011-04-04 2016-02-23 Mitsubishi Electric Corporation Texture mapping device
US8788556B2 (en) * 2011-05-12 2014-07-22 Microsoft Corporation Matrix computation framework
US8933934B1 (en) 2011-06-17 2015-01-13 Rockwell Collins, Inc. System and method for assuring the proper operation of a programmable graphics processing unit
CN103608850B (en) * 2011-06-23 2017-05-10 英特尔公司 Stochastic rasterization with selective culling
CN102270095A (en) * 2011-06-30 2011-12-07 威盛电子股份有限公司 Multiple display control method and system
US9342817B2 (en) 2011-07-07 2016-05-17 Sony Interactive Entertainment LLC Auto-creating groups for sharing photos
US9009670B2 (en) 2011-07-08 2015-04-14 Microsoft Technology Licensing, Llc Automated testing of application program interfaces using genetic algorithms
US9652560B1 (en) 2011-07-18 2017-05-16 Apple Inc. Non-blocking memory management unit
US20130027416A1 (en) * 2011-07-25 2013-01-31 Karthikeyan Vaithianathan Gather method and apparatus for media processing accelerators
EP2754148A4 (en) * 2011-09-06 2015-12-30 Dreamworks Animation Llc Optimizing graph evaluation
WO2013040261A1 (en) * 2011-09-14 2013-03-21 Onlive, Inc. System and method for improving the graphics performance of hosted applications
KR20130045450A (en) * 2011-10-26 2013-05-06 Samsung Electronics Co., Ltd. Graphic processing unit, devices having same, and method of operating same
US20130106887A1 (en) * 2011-10-31 2013-05-02 Christopher Tremblay Texture generation using a transformation matrix
CN103108197A (en) 2011-11-14 2013-05-15 辉达公司 Priority level compression method and priority level compression system for three-dimensional (3D) video wireless display
US10275924B2 (en) * 2011-12-26 2019-04-30 Intel Corporation Techniques for managing three-dimensional graphics display modes
WO2013101167A1 (en) * 2011-12-30 2013-07-04 Intel Corporation Five-dimensional rasterization with conservative bounds
US9829715B2 (en) 2012-01-23 2017-11-28 Nvidia Corporation Eyewear device for transmitting signal and communication method thereof
WO2013130030A1 (en) * 2012-02-27 2013-09-06 Intel Corporation Using cost estimation to improve performance of tile rendering for image processing
US20130235154A1 (en) * 2012-03-09 2013-09-12 Guy Salton-Morgenstern Method and apparatus to minimize computations in real time photo realistic rendering
WO2013148595A2 (en) * 2012-03-26 2013-10-03 Onlive, Inc. System and method for improving the graphics performance of hosted applications
US10559123B2 (en) * 2012-04-04 2020-02-11 Qualcomm Incorporated Patched shading in graphics processing
US9208603B2 (en) * 2012-05-03 2015-12-08 Zemax, Llc Methods and associated systems for simulating illumination patterns
JP5910310B2 (en) * 2012-05-22 2016-04-27 Fujitsu Limited Drawing processing apparatus and drawing processing method
US9411595B2 (en) 2012-05-31 2016-08-09 Nvidia Corporation Multi-threaded transactional memory coherence
US9251555B2 (en) 2012-06-08 2016-02-02 2236008 Ontario, Inc. Tiled viewport composition
US8823728B2 (en) 2012-06-08 2014-09-02 Apple Inc. Dynamically generated images and effects
WO2013185062A1 (en) * 2012-06-08 2013-12-12 Advanced Micro Devices, Inc. Graphics library extensions
JP5977591B2 (en) * 2012-06-20 2016-08-24 Olympus Corporation Image processing apparatus, imaging apparatus including the same, image processing method, and computer-readable recording medium recording an image processing program
US9495781B2 (en) * 2012-06-21 2016-11-15 Nvidia Corporation Early sample evaluation during coarse rasterization
JP2014006674A (en) * 2012-06-22 2014-01-16 Canon Inc Image processing device, control method of the same and program
US9471967B2 (en) 2012-07-20 2016-10-18 The Board Of Trustees Of The University Of Illinois Relighting fragments for insertion into content
US9105250B2 (en) 2012-08-03 2015-08-11 Nvidia Corporation Coverage compaction
CN102831694B (en) * 2012-08-09 2015-01-14 广州广电运通金融电子股份有限公司 Image identification system and image storage control method
US20140049534A1 (en) * 2012-08-14 2014-02-20 Livermore Software Technology Corp Efficient Method Of Rendering A Computerized Model To Be Displayed On A Computer Monitor
US9578224B2 (en) 2012-09-10 2017-02-21 Nvidia Corporation System and method for enhanced monoimaging
GB2500284B (en) * 2012-09-12 2014-04-30 Imagination Tech Ltd Tile based computer graphics
US9916680B2 (en) * 2012-10-12 2018-03-13 Nvidia Corporation Low-power processing in depth read-only operating regimes
US9002125B2 (en) 2012-10-15 2015-04-07 Nvidia Corporation Z-plane compression with z-plane predictors
US10210956B2 (en) * 2012-10-24 2019-02-19 Cathworks Ltd. Diagnostically useful results in real time
WO2014064702A2 (en) 2012-10-24 2014-05-01 Cathworks Ltd. Automated measurement system and method for coronary artery disease scoring
US8941676B2 (en) * 2012-10-26 2015-01-27 Nvidia Corporation On-chip anti-alias resolve in a cache tiling architecture
US9165399B2 (en) * 2012-11-01 2015-10-20 Nvidia Corporation System, method, and computer program product for inputting modified coverage data into a pixel shader
US9317948B2 (en) 2012-11-16 2016-04-19 Arm Limited Method of and apparatus for processing graphics
US9741154B2 (en) * 2012-11-21 2017-08-22 Intel Corporation Recording the results of visibility tests at the input geometry object granularity
KR102057163B1 (en) * 2012-12-17 2019-12-18 ARM Limited Hidden surface removal in graphics processing systems
GB201223089D0 (en) 2012-12-20 2013-02-06 Imagination Tech Ltd Hidden culling in tile based computer generated graphics
US9824009B2 (en) 2012-12-21 2017-11-21 Nvidia Corporation Information coherency maintenance systems and methods
US9082212B2 (en) * 2012-12-21 2015-07-14 Nvidia Corporation Programmable blending via multiple pixel shader dispatches
US10102142B2 (en) 2012-12-26 2018-10-16 Nvidia Corporation Virtual address based memory reordering
US9591309B2 (en) 2012-12-31 2017-03-07 Nvidia Corporation Progressive lossy memory compression
US9607407B2 (en) 2012-12-31 2017-03-28 Nvidia Corporation Variable-width differential memory compression
US9734598B2 (en) * 2013-01-15 2017-08-15 Microsoft Technology Licensing, Llc Engine for streaming virtual textures
DE102013201377A1 (en) * 2013-01-29 2014-07-31 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for processing 3d image data
US20140225902A1 (en) * 2013-02-11 2014-08-14 Nvidia Corporation Image pyramid processor and method of multi-resolution image processing
US9767600B2 (en) * 2013-03-12 2017-09-19 Nvidia Corporation Target independent rasterization with multiple color samples
US9992021B1 (en) 2013-03-14 2018-06-05 GoTenna, Inc. System and method for private and point-to-point communication between computing devices
US10078911B2 (en) * 2013-03-15 2018-09-18 Nvidia Corporation System, method, and computer program product for executing processes involving at least one primitive in a graphics processor, utilizing a data structure
GB2541084B (en) 2013-03-15 2017-05-17 Imagination Tech Ltd Rendering with point sampling and pre-computed light transport information
US10957094B2 (en) * 2013-03-29 2021-03-23 Advanced Micro Devices, Inc. Hybrid render with preferred primitive batch binning and sorting
US10169906B2 (en) 2013-03-29 2019-01-01 Advanced Micro Devices, Inc. Hybrid render with deferred primitive batch binning
GB2506706B (en) 2013-04-02 2014-09-03 Imagination Tech Ltd Tile-based graphics
EP2801971A1 (en) * 2013-05-10 2014-11-12 Rightware Oy A method of and system for rendering an image
US10008029B2 (en) 2013-05-31 2018-06-26 Nvidia Corporation Updating depth related graphics data
US10204391B2 (en) 2013-06-04 2019-02-12 Arm Limited Method of and apparatus for processing graphics
US9710894B2 (en) 2013-06-04 2017-07-18 Nvidia Corporation System and method for enhanced multi-sample anti-aliasing
US10102603B2 (en) 2013-06-10 2018-10-16 Sony Interactive Entertainment Inc. Scheme for compressing vertex shader output parameters
US10096079B2 (en) 2013-06-10 2018-10-09 Sony Interactive Entertainment Inc. Fragment shaders perform vertex shader computations
US10134102B2 (en) 2013-06-10 2018-11-20 Sony Interactive Entertainment Inc. Graphics processing hardware for using compute shaders as front end for vertex shaders
US10176621B2 (en) * 2013-06-10 2019-01-08 Sony Interactive Entertainment Inc. Using compute shaders as front end for vertex shaders
US9477575B2 (en) 2013-06-12 2016-10-25 Nvidia Corporation Method and system for implementing a multi-threaded API stream replay
US9418400B2 (en) 2013-06-18 2016-08-16 Nvidia Corporation Method and system for rendering simulated depth-of-field visual effect
US9965893B2 (en) * 2013-06-25 2018-05-08 Google Llc Curvature-driven normal interpolation for shading applications
US9684998B2 (en) * 2013-07-22 2017-06-20 Nvidia Corporation Pixel serialization to improve conservative depth estimation
KR102066659B1 (en) * 2013-08-13 2020-01-15 삼성전자 주식회사 A graphic processing unit, a graphic processing system including the same, and a method of operating the same
US9747658B2 (en) * 2013-09-06 2017-08-29 Apple Inc. Arbitration method for multi-request display pipeline
US9569385B2 (en) 2013-09-09 2017-02-14 Nvidia Corporation Memory transaction ordering
US9292899B2 (en) 2013-09-25 2016-03-22 Apple Inc. Reference frame data prefetching in block processing pipelines
US9224186B2 (en) 2013-09-27 2015-12-29 Apple Inc. Memory latency tolerance in block processing pipelines
US9659393B2 (en) * 2013-10-07 2017-05-23 Intel Corporation Selective rasterization
US20150109486A1 (en) * 2013-10-17 2015-04-23 Nvidia Corporation Filtering extraneous image data in camera systems
US10424063B2 (en) 2013-10-24 2019-09-24 CathWorks, LTD. Vascular characteristic determination with correspondence modeling of a vascular tree
GB2521171B (en) * 2013-12-11 2020-02-05 Advanced Risc Mach Ltd Clipping of graphics primitives
US9569883B2 (en) 2013-12-12 2017-02-14 Intel Corporation Decoupled shading pipeline
US20150179142A1 (en) * 2013-12-20 2015-06-25 Nvidia Corporation System, method, and computer program product for reduced-rate calculation of low-frequency pixel shader intermediate values
US9396585B2 (en) * 2013-12-31 2016-07-19 Nvidia Corporation Generating indirection maps for texture space effects
US11350015B2 (en) 2014-01-06 2022-05-31 Panamorph, Inc. Image processing system and method
WO2015103646A1 (en) * 2014-01-06 2015-07-09 Panamorph, Inc. Image processing system and method
US10935788B2 (en) 2014-01-24 2021-03-02 Nvidia Corporation Hybrid virtual 3D rendering approach to stereovision
US9773342B2 (en) * 2014-01-27 2017-09-26 Nvidia Corporation Barycentric filtering for measured bidirectional scattering distribution function
US9842424B2 (en) * 2014-02-10 2017-12-12 Pixar Volume rendering using adaptive buckets
US20150228106A1 (en) * 2014-02-13 2015-08-13 Vixs Systems Inc. Low latency video texture mapping via tight integration of codec engine with 3d graphics engine
KR102111740B1 (en) * 2014-04-03 2020-05-15 삼성전자주식회사 Method and device for processing image data
US20150288767A1 (en) 2014-04-03 2015-10-08 Centurylink Intellectual Property Llc Network Functions Virtualization Interconnection Hub
KR101923562B1 (en) 2014-04-05 2018-11-29 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US9495790B2 (en) 2014-04-05 2016-11-15 Sony Interactive Entertainment America Llc Gradient adjustment for texture mapping to non-orthonormal grid
US9865074B2 (en) 2014-04-05 2018-01-09 Sony Interactive Entertainment America Llc Method for efficient construction of high resolution display buffers
US9652882B2 (en) 2014-04-05 2017-05-16 Sony Interactive Entertainment America Llc Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location
US10783696B2 (en) 2014-04-05 2020-09-22 Sony Interactive Entertainment LLC Gradient adjustment for texture mapping to non-orthonormal grid
US9836816B2 (en) 2014-04-05 2017-12-05 Sony Interactive Entertainment America Llc Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport
US11302054B2 (en) 2014-04-05 2022-04-12 Sony Interactive Entertainment Europe Limited Varying effective resolution by screen location by changing active color sample count within multiple render targets
US10068311B2 (en) 2014-04-05 2018-09-04 Sony Interactive Entertainment LLC Varying effective resolution by screen location by changing active color sample count within multiple render targets
US9710881B2 (en) 2014-04-05 2017-07-18 Sony Interactive Entertainment America Llc Varying effective resolution by screen location by altering rasterization parameters
US9710957B2 (en) * 2014-04-05 2017-07-18 Sony Interactive Entertainment America Llc Graphics processing enhancement by tracking object and/or primitive identifiers
GB2525666B (en) * 2014-05-02 2020-12-23 Advanced Risc Mach Ltd Graphics processing systems
GB2526598B (en) 2014-05-29 2018-11-28 Imagination Tech Ltd Allocation of primitives to primitive blocks
JP6344064B2 (en) * 2014-05-30 2018-06-20 ブラザー工業株式会社 Image processing apparatus and computer program
GB2524121B (en) * 2014-06-17 2016-03-02 Imagination Tech Ltd Assigning primitives to tiles in a graphics processing system
US9307249B2 (en) * 2014-06-20 2016-04-05 Freescale Semiconductor, Inc. Processing device and method of compressing images
US11049269B2 (en) 2014-06-27 2021-06-29 Samsung Electronics Co., Ltd. Motion based adaptive rendering
US20150379682A1 (en) * 2014-06-27 2015-12-31 Samsung Electronics Co., Ltd. Vertex attribute data compression with random access using hardware
US9842428B2 (en) * 2014-06-27 2017-12-12 Samsung Electronics Co., Ltd. Dynamically optimized deferred rendering pipeline
US9830714B2 (en) * 2014-06-27 2017-11-28 Samsung Electronics Co., Ltd. Graphics processing with advection to reconstruct missing sample data points
JP6335335B2 (en) * 2014-06-30 2018-05-30 インテル・コーポレーション Adaptive partition mechanism with arbitrary tile shapes for tile-based rendering GPU architecture
US9832388B2 (en) 2014-08-04 2017-11-28 Nvidia Corporation Deinterleaving interleaved high dynamic range image by using YUV interpolation
US10225327B2 (en) * 2014-08-13 2019-03-05 Centurylink Intellectual Property Llc Remoting application servers
WO2016028293A1 (en) * 2014-08-20 2016-02-25 Landmark Graphics Corporation Optimizing computer hardware resource utilization when processing variable precision data
US9232156B1 (en) 2014-09-22 2016-01-05 Freescale Semiconductor, Inc. Video processing device and method
US9824412B2 (en) * 2014-09-24 2017-11-21 Intel Corporation Position-only shading pipeline
KR102281180B1 (en) 2014-11-21 2021-07-23 삼성전자주식회사 Image processing apparatus and method
US20160155261A1 (en) * 2014-11-26 2016-06-02 Bevelity LLC Rendering and Lightmap Calculation Methods
US9710878B2 (en) 2014-12-17 2017-07-18 Microsoft Technology Licensing, LLC Low power DMA labeling
US10181175B2 (en) * 2014-12-17 2019-01-15 Microsoft Technology Licensing, Llc Low power DMA snoop and skip
US10410081B2 (en) * 2014-12-23 2019-09-10 Intel Corporation Method and apparatus for a high throughput rasterizer
JP2016134009A (en) * 2015-01-20 2016-07-25 株式会社ジオ技術研究所 Three-dimensional map display system
US9607414B2 (en) 2015-01-27 2017-03-28 Splunk Inc. Three-dimensional point-in-polygon operation to facilitate displaying three-dimensional structures
US9916326B2 (en) 2015-01-27 2018-03-13 Splunk, Inc. Efficient point-in-polygon indexing technique for facilitating geofencing operations
US9836874B2 (en) * 2015-01-27 2017-12-05 Splunk Inc. Efficient polygon-clipping technique to reduce data transfer requirements for a viewport
US10026204B2 (en) 2015-01-27 2018-07-17 Splunk Inc. Efficient point-in-polygon indexing technique for processing queries over geographic data sets
GB2536964B (en) * 2015-04-02 2019-12-25 Ge Aviat Systems Ltd Avionics display system
US10002404B2 (en) * 2015-04-15 2018-06-19 Mediatek Singapore Pte. Ltd. Optimizing shading process for mixed order-sensitive and order-insensitive shader operations
US10255651B2 (en) 2015-04-15 2019-04-09 Channel One Holdings Inc. Methods and systems for generating shaders to emulate a fixed-function graphics pipeline
US10403025B2 (en) 2015-06-04 2019-09-03 Samsung Electronics Co., Ltd. Automated graphics and compute tile interleave
US10089775B2 (en) 2015-06-04 2018-10-02 Samsung Electronics Co., Ltd. Automated graphics and compute tile interleave
US10535114B2 (en) * 2015-08-18 2020-01-14 Nvidia Corporation Controlling multi-pass rendering sequences in a cache tiling architecture
CN105118089B (en) * 2015-08-19 2018-03-20 上海兆芯集成电路有限公司 Programmable pixel placement method in 3-D graphic pipeline and use its device
US9882833B2 (en) 2015-09-28 2018-01-30 Centurylink Intellectual Property Llc Intent-based services orchestration
US10147222B2 (en) * 2015-11-25 2018-12-04 Nvidia Corporation Multi-pass rendering in a screen space pipeline
US20170154403A1 (en) * 2015-11-30 2017-06-01 Intel Corporation Triple buffered constant buffers for efficient processing of graphics data at computing devices
US9672656B1 (en) * 2015-12-16 2017-06-06 Google Inc. Variable level-of-detail map rendering
US9965417B1 (en) * 2016-01-13 2018-05-08 Xilinx, Inc. Use of interrupt memory for communication via PCIe communication fabric
US9818051B2 (en) * 2016-01-29 2017-11-14 Ricoh Company, Ltd. Rotation and clipping mechanism
GB2546810B (en) * 2016-02-01 2019-10-16 Imagination Tech Ltd Sparse rendering
US9906981B2 (en) 2016-02-25 2018-02-27 Nvidia Corporation Method and system for dynamic regulation and control of Wi-Fi scans
US10096147B2 (en) 2016-03-10 2018-10-09 Qualcomm Incorporated Visibility information modification
US10412130B2 (en) 2016-04-04 2019-09-10 Hanwha Techwin Co., Ltd. Method and apparatus for playing media stream on web browser
GB2553744B (en) 2016-04-29 2018-09-05 Advanced Risc Mach Ltd Graphics processing systems
GB201608101D0 (en) * 2016-05-09 2016-06-22 Magic Pony Technology Ltd Multiscale 3D texture synthesis
JP7036742B2 (en) 2016-05-16 2022-03-15 キャスワークス リミテッド Vascular evaluation system
US10290134B2 (en) * 2016-06-01 2019-05-14 Adobe Inc. Coverage based approach to image rendering using opacity values
US10528607B2 (en) * 2016-07-29 2020-01-07 Splunk Inc. Syntax templates for coding
EP3504684B1 (en) * 2016-08-29 2022-11-16 Advanced Micro Devices, Inc. Hybrid render with preferred primitive batch binning and sorting
US10535186B2 (en) * 2016-08-30 2020-01-14 Intel Corporation Multi-resolution deferred shading using texel shaders in computing environments
US10394990B1 (en) * 2016-09-27 2019-08-27 Altera Corporation Initial condition support for partial reconfiguration
US10756785B2 (en) * 2016-09-29 2020-08-25 Nokia Technologies Oy Flexible reference signal design
US10417134B2 (en) * 2016-11-10 2019-09-17 Oracle International Corporation Cache memory architecture and policies for accelerating graph algorithms
US10282889B2 (en) * 2016-11-29 2019-05-07 Samsung Electronics Co., Ltd. Vertex attribute compression and decompression in hardware
US10402388B1 (en) * 2017-01-31 2019-09-03 Levyx, Inc. Partition-based analytic systems and methods
US10204393B2 (en) * 2017-04-10 2019-02-12 Intel Corporation Pre-pass surface analysis to achieve adaptive anti-aliasing modes
US10192351B2 (en) * 2017-04-17 2019-01-29 Intel Corporation Anti-aliasing adaptive shader with pixel tile coverage raster rule system, apparatus and method
US10482028B2 (en) * 2017-04-21 2019-11-19 Intel Corporation Cache optimization for graphics systems
US10643374B2 (en) * 2017-04-24 2020-05-05 Intel Corporation Positional only shading pipeline (POSH) geometry data processing with coarse Z buffer
US10540287B2 (en) 2017-05-12 2020-01-21 Samsung Electronics Co., Ltd Spatial memory streaming confidence mechanism
CN107680556B (en) * 2017-11-03 2019-08-02 深圳市华星光电半导体显示技术有限公司 A kind of display power-economizing method, device and display
US10599584B2 (en) * 2017-11-07 2020-03-24 Arm Limited Write buffer operation in data processing systems
US10740954B2 (en) 2018-03-17 2020-08-11 Nvidia Corporation Shadow denoising in ray-tracing applications
JP7119081B2 (en) 2018-05-24 2022-08-16 株式会社Preferred Networks Projection data generation device, three-dimensional model, projection data generation method, neural network generation method and program
US10991079B2 (en) 2018-08-14 2021-04-27 Nvidia Corporation Using previously rendered scene frames to reduce pixel noise
US10950305B1 (en) * 2018-11-02 2021-03-16 Facebook Technologies, Llc Selective pixel output
KR102589969B1 (en) 2018-11-05 2023-10-16 삼성전자주식회사 Graphics processing unit, graphics processing system and graphics processing method of performing interpolation in deferred shading
CN109710227B (en) * 2018-11-07 2022-05-24 苏州蜗牛数字科技股份有限公司 Method for scheduling texture atlas
EP3671651A1 (en) 2018-12-21 2020-06-24 Imagination Technologies Limited Primitive block generator for graphics processing systems
US10699475B1 (en) * 2018-12-28 2020-06-30 Intel Corporation Multi-pass apparatus and method for early termination of graphics shading
EP3690575B1 (en) * 2019-02-04 2022-08-24 Siemens Aktiengesellschaft Planning system, method for testing a consistent detection of pipes in a planning system, and control program
US11620478B2 (en) * 2019-02-06 2023-04-04 Texas Instruments Incorporated Semantic occupancy grid management in ADAS/autonomous driving
US11227430B2 (en) * 2019-06-19 2022-01-18 Samsung Electronics Co., Ltd. Optimized pixel shader attribute management
US11488349B2 (en) 2019-06-28 2022-11-01 Ati Technologies Ulc Method and apparatus for alpha blending images from different color formats
US10981059B2 (en) * 2019-07-03 2021-04-20 Sony Interactive Entertainment LLC Asset aware computing architecture for graphics processing
US10937233B2 (en) * 2019-07-22 2021-03-02 Arm Limited Graphics processing systems
CN110686652B (en) * 2019-09-16 2021-07-06 武汉科技大学 Depth measurement method based on combination of depth learning and structured light
US11429690B2 (en) * 2019-10-10 2022-08-30 Hover, Inc. Interactive path tracing on the web
CN111062856B (en) * 2019-11-18 2023-10-20 中国航空工业集团公司西安航空计算技术研究所 Optimized OpenGL graphic attribute arrangement method
US11170555B2 (en) 2019-11-27 2021-11-09 Arm Limited Graphics processing systems
US11216993B2 (en) 2019-11-27 2022-01-04 Arm Limited Graphics processing systems
US11210847B2 (en) 2019-11-27 2021-12-28 Arm Limited Graphics processing systems
US11210821B2 (en) * 2019-11-27 2021-12-28 Arm Limited Graphics processing systems
US11243882B2 (en) * 2020-04-15 2022-02-08 International Business Machines Corporation In-array linked list identifier pool scheme
US11574249B2 (en) * 2020-06-02 2023-02-07 International Business Machines Corporation Streamlining data processing optimizations for machine learning workloads
US11417073B2 (en) * 2020-07-16 2022-08-16 Cesium GS, Inc. System and method for generating hierarchical level-of-detail measurements for runtime calculation and visualization
TWI756771B (en) * 2020-08-05 2022-03-01 偉詮電子股份有限公司 Image transformation method
TWI779336B (en) * 2020-08-24 2022-10-01 宏碁股份有限公司 Display system and method of displaying autostereoscopic image
WO2022086795A1 (en) * 2020-10-22 2022-04-28 Zazzle Inc. System and method for high quality renderings of synthetic views of custom products
US20220246081A1 (en) * 2021-01-05 2022-08-04 Google Llc Hidden display interfaces and associated systems and methods
GB2599185B (en) * 2021-03-23 2022-08-24 Imagination Tech Ltd Intersection testing in a ray tracing system
GB2599186B (en) * 2021-03-23 2022-10-12 Imagination Tech Ltd Intersection testing in a ray tracing system
GB2599181B (en) 2021-03-23 2022-11-16 Imagination Tech Ltd Intersection testing in a ray tracing system
GB2599184B (en) 2021-03-23 2022-11-23 Imagination Tech Ltd Intersection testing in a ray tracing system
GB2607002A (en) * 2021-05-11 2022-11-30 Advanced Risc Mach Ltd Fragment dependency management for variable rate shading
EP4094815A3 (en) * 2021-05-28 2022-12-07 Bidstack Group PLC Viewability testing in a computer-generated environment
JP2023000232A (en) * 2021-06-17 2023-01-04 富士通株式会社 Data processing program, data processing method, and data processing system
US20230334728A1 (en) * 2022-04-15 2023-10-19 Meta Platforms Technologies, Llc Destination Update for Blending Modes in a Graphics Pipeline
US11882295B2 (en) 2022-04-15 2024-01-23 Meta Platforms Technologies, Llc Low-power high throughput hardware decoder with random block access
CN116263981B (en) * 2022-04-20 2023-11-17 象帝先计算技术(重庆)有限公司 Graphics processor, system, apparatus, device, and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE36145E (en) * 1991-04-30 1999-03-16 Optigraphics Corporation System for managing tiled images using multiple resolutions
US5977977A (en) * 1995-08-04 1999-11-02 Microsoft Corporation Method and system for multi-pass rendering

Family Cites Families (132)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2353185A1 (en) 1976-04-09 1977-12-23 Thomson Csf RAPID CORRELATOR DEVICE, AND SYSTEM FOR PROCESSING THE SIGNALS OF A RECEIVER INCLUDING SUCH A DEVICE
FR2481489A1 (en) 1980-04-25 1981-10-30 Thomson Csf BIDIMENSIONAL CORRELATOR DEVICE
US4484346A (en) 1980-08-15 1984-11-20 Sternberg Stanley R Neighborhood transformation logic circuitry for an image analyzer system
US4559618A (en) 1982-09-13 1985-12-17 Data General Corp. Content-addressable memory module with associative clear
US4783829A (en) * 1983-02-23 1988-11-08 Hitachi, Ltd. Pattern recognition apparatus
US4581760A (en) 1983-04-27 1986-04-08 Fingermatrix, Inc. Fingerprint verification method
US4670858A (en) 1983-06-07 1987-06-02 Tektronix, Inc. High storage capacity associative memory
US4594673A (en) 1983-06-28 1986-06-10 Gti Corporation Hidden surface processor
US4532606A (en) 1983-07-14 1985-07-30 Burroughs Corporation Content addressable memory cell with shift capability
US4564952A (en) 1983-12-08 1986-01-14 At&T Bell Laboratories Compensation of filter symbol interference by adaptive estimation of received symbol sequences
US4694404A (en) 1984-01-12 1987-09-15 Key Bank N.A. High-speed image generation of complex solid objects using octree encoding
EP0166577A3 (en) 1984-06-21 1987-10-14 Advanced Micro Devices, Inc. Information sorting and storage apparatus and method
US4794559A (en) 1984-07-05 1988-12-27 American Telephone And Telegraph Company, At&T Bell Laboratories Content addressable semiconductor memory arrays
US4622653A (en) 1984-10-29 1986-11-11 Texas Instruments Incorporated Block associative memory
US4669054A (en) 1985-05-03 1987-05-26 General Dynamics, Pomona Division Device and method for optically correlating a pair of images
SE445154B (en) 1985-07-08 1986-06-02 Ibm Svenska Ab METHOD OF REMOVING HIDDEN LINES
US4695973A (en) 1985-10-22 1987-09-22 The United States Of America As Represented By The Secretary Of The Air Force Real-time programmable optical correlator
US4758982A (en) 1986-01-08 1988-07-19 Advanced Micro Devices, Inc. Quasi content addressable memory
US4890242A (en) 1986-06-05 1989-12-26 Xox Corporation Solid-modeling system using topology directed subdivision for determination of surface intersections
US5067162A (en) 1986-06-30 1991-11-19 Identix Incorporated Method and apparatus for verifying identity using image correlation
US4998286A (en) 1987-02-13 1991-03-05 Olympus Optical Co., Ltd. Correlation operational apparatus for multi-dimensional images
US4825391A (en) 1987-07-20 1989-04-25 General Electric Company Depth buffer priority processing for real time computer image generating systems
US5146592A (en) 1987-09-14 1992-09-08 Visual Information Technologies, Inc. High speed image processing computer with overlapping windows-div
US5129060A (en) 1987-09-14 1992-07-07 Visual Information Technologies, Inc. High speed image processing computer
US4841467A (en) 1987-10-05 1989-06-20 General Electric Company Architecture to implement floating point multiply/accumulate operations
GB2215623B (en) 1987-10-23 1991-07-31 Rotation Limited Apparatus for playing a game for one or more players and to games played with the apparatus
US4945500A (en) 1987-11-04 1990-07-31 Schlumberger Technologies, Inc. Triangle processor for 3-D graphics display system
US4888712A (en) 1987-11-04 1989-12-19 Schlumberger Systems, Inc. Guardband clipping method and apparatus for 3-D graphics display system
FR2625345A1 (en) 1987-12-24 1989-06-30 Thomson Cgr THREE-DIMENSIONAL VIEWING METHOD OF NUMERICALLY ENCODED OBJECTS IN TREE FORM AND DEVICE FOR IMPLEMENTING THE SAME
US5040223A (en) 1988-02-17 1991-08-13 Nippondenso Co., Ltd. Fingerprint verification method employing plural correlation judgement levels and sequential judgement stages
US4888583A (en) 1988-03-14 1989-12-19 Ligocki Terry J Method and apparatus for rendering an image from data arranged in a constructive solid geometry format
US5083287A (en) 1988-07-14 1992-01-21 Daikin Industries, Inc. Method and apparatus for applying a shadowing operation to figures to be drawn for displaying on crt-display
US5133052A (en) 1988-08-04 1992-07-21 Xerox Corporation Interactive graphical search and replace utility for computer-resident synthetic graphic image editors
US4996666A (en) 1988-08-12 1991-02-26 Duluk Jr Jerome F Content-addressable memory system capable of fully parallel magnitude comparisons
GB8828342D0 (en) * 1988-12-05 1989-01-05 Rediffusion Simulation Ltd Image generator
US4970636A (en) 1989-01-23 1990-11-13 Honeywell Inc. Memory interface controller
FR2646046B1 (en) 1989-04-18 1995-08-25 France Etat METHOD AND DEVICE FOR COMPRESSING IMAGE DATA BY MATHEMATICAL TRANSFORMATION WITH REDUCED COST OF IMPLEMENTATION, IN PARTICULAR FOR TRANSMISSION AT REDUCED THROUGHPUT OF IMAGE SEQUENCES
JPH0776991B2 (en) 1989-10-24 1995-08-16 インターナショナル・ビジネス・マシーンズ・コーポレーション NURBS data conversion method and apparatus
US5245700A (en) 1989-11-21 1993-09-14 International Business Machines Corporation Adjustment of z-buffer values for lines on the surface of a polygon
JPH03166601A (en) 1989-11-27 1991-07-18 Hitachi Ltd Symbolizing device and process controller and control supporting device using the symbolizing device
US5129051A (en) 1990-03-16 1992-07-07 Hewlett-Packard Company Decomposition of arbitrary polygons into trapezoids
US5123085A (en) 1990-03-19 1992-06-16 Sun Microsystems, Inc. Method and apparatus for rendering anti-aliased polygons
US5128888A (en) 1990-04-02 1992-07-07 Advanced Micro Devices, Inc. Arithmetic unit having multiple accumulators
GB9009127D0 (en) 1990-04-24 1990-06-20 Rediffusion Simulation Ltd Image generator
US5369734A (en) 1990-05-18 1994-11-29 Kabushiki Kaisha Toshiba Method for processing and displaying hidden-line graphic images
DE69122557T2 (en) 1990-06-29 1997-04-24 Philips Electronics Nv Imaging
JPH0475183A (en) 1990-07-17 1992-03-10 Mitsubishi Electric Corp Correlativity detector for image
US5054090A (en) 1990-07-20 1991-10-01 Knight Arnold W Fingerprint correlation system with parallel FIFO processor
US5050220A (en) 1990-07-24 1991-09-17 The United States Of America As Represented By The Secretary Of The Navy Optical fingerprint correlator
JPH07120435B2 (en) 1990-12-06 1995-12-20 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and system for initializing and updating high-speed Z buffer
FR2670923A1 (en) 1990-12-21 1992-06-26 Philips Lab Electronique CORRELATION DEVICE.
JPH07122908B2 (en) 1991-03-12 1995-12-25 インターナショナル・ビジネス・マシーンズ・コーポレイション Apparatus and method for generating displayable information representing a three-dimensional solid object
US5289567A (en) 1991-04-01 1994-02-22 Digital Equipment Corporation Computer apparatus and method for finite element identification in interactive modeling
US5293467A (en) 1991-04-03 1994-03-08 Buchner Gregory C Method for resolving priority between a calligraphically-displayed point feature and both raster-displayed faces and other calligraphically-displayed point features in a CIG system
US5315537A (en) 1991-04-08 1994-05-24 Blacker Teddy D Automated quadrilateral surface discretization method and apparatus usable to generate mesh in a finite element analysis system
US5347619A (en) 1991-04-30 1994-09-13 International Business Machines Corporation Nonconvex polygon identifier
US5299139A (en) 1991-06-21 1994-03-29 Cadence Design Systems, Inc. Short locator method
US5493644A (en) 1991-07-11 1996-02-20 Hewlett-Packard Company Polygon span interpolator with main memory Z buffer
US5295235A (en) 1992-02-14 1994-03-15 Steve Newman Polygon engine for updating computer graphic display employing compressed bit map data
US5319743A (en) 1992-04-02 1994-06-07 Digital Equipment Corporation Intelligent and compact bucketing method for region queries in two-dimensional space
US5669010A (en) 1992-05-18 1997-09-16 Silicon Engines Cascaded two-stage computational SIMD engine having multi-port memory and multiple arithmetic units
WO1993023816A1 (en) 1992-05-18 1993-11-25 Silicon Engines Inc. System and method for cross correlation with application to video motion vector estimation
US5621866A (en) 1992-07-24 1997-04-15 Fujitsu Limited Image processing apparatus having improved frame buffer with Z buffer and SAM port
US5455900A (en) 1992-10-20 1995-10-03 Ricoh Company, Ltd. Image processing apparatus
US5388206A (en) 1992-11-13 1995-02-07 The University Of North Carolina Architecture and apparatus for image generation
TW241196B (en) 1993-01-15 1995-02-21 Du Pont
JP3240447B2 (en) 1993-02-19 2001-12-17 株式会社リコー Image processing device
US5574835A (en) 1993-04-06 1996-11-12 Silicon Engines, Inc. Bounding box and projections detection of hidden polygons in three-dimensional spatial databases
US5509110A (en) 1993-04-26 1996-04-16 Loral Aerospace Corporation Method for tree-structured hierarchical occlusion in image generators
US6167143A (en) 1993-05-03 2000-12-26 U.S. Philips Corporation Monitoring system
US5684939A (en) 1993-07-09 1997-11-04 Silicon Graphics, Inc. Antialiased imaging with improved pixel supersampling
US5579455A (en) * 1993-07-30 1996-11-26 Apple Computer, Inc. Rendering of 3D scenes on a display using hierarchical z-buffer visibility
GB9316214D0 (en) 1993-08-05 1993-09-22 Philips Electronics Uk Ltd Image processing
JPH07182537A (en) 1993-12-21 1995-07-21 Toshiba Corp Device and method for plotting graphic
US5699497A (en) 1994-02-17 1997-12-16 Evans & Sutherland Computer Corporation Rendering global macro texture, for producing a dynamic image, as on computer generated terrain, seen from a moving viewpoint
US5778245A (en) * 1994-03-01 1998-07-07 Intel Corporation Method and apparatus for dynamic allocation of multiple buffers in a processor
US5623628A (en) * 1994-03-02 1997-04-22 Intel Corporation Computer system and method for maintaining memory consistency in a pipelined, non-blocking caching bus request queue
US5546194A (en) 1994-03-23 1996-08-13 Videofaxx, Inc. Method and apparatus for converting a video image format to a group III fax format
US5596686A (en) 1994-04-21 1997-01-21 Silicon Engines, Inc. Method and apparatus for simultaneous parallel query graphics rendering Z-coordinate buffer
US5544306A (en) 1994-05-03 1996-08-06 Sun Microsystems, Inc. Flexible dram access in a frame buffer memory and system
JPH0855239A (en) 1994-07-21 1996-02-27 Internatl Business Mach Corp <Ibm> Method and apparatus for judgment of visibility of graphicalobject
US5572634A (en) 1994-10-26 1996-11-05 Silicon Engines, Inc. Method and apparatus for spatial simulation acceleration
US5798770A (en) 1995-03-24 1998-08-25 3Dlabs Inc. Ltd. Graphics rendering system with reconfigurable pipeline sequence
US5710876A (en) 1995-05-25 1998-01-20 Silicon Graphics, Inc. Computer graphics system for rendering images using full spectral illumination data
JPH08329276A (en) 1995-06-01 1996-12-13 Ricoh Co Ltd Three-dimensional graphic processor
EP0840915A4 (en) 1995-07-26 1998-11-04 Raycer Inc Method and apparatus for span sorting rendering system
US5841447A (en) 1995-08-02 1998-11-24 Evans & Sutherland Computer Corporation System and method for improving pixel update performance
US5990904A (en) 1995-08-04 1999-11-23 Microsoft Corporation Method and system for merging pixel fragments in a graphics rendering system
US5864342A (en) 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
US5949428A (en) 1995-08-04 1999-09-07 Microsoft Corporation Method and apparatus for resolving pixel data in a graphics rendering system
US5767859A (en) 1995-09-28 1998-06-16 Hewlett-Packard Company Method and apparatus for clipping non-planar polygons
US5854631A (en) 1995-11-22 1998-12-29 Silicon Graphics, Inc. System and method for merging pixel fragments based on depth range values
US6331856B1 (en) 1995-11-22 2001-12-18 Nintendo Co., Ltd. Video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
US5574836A (en) 1996-01-22 1996-11-12 Broemmelsiek; Raymond M. Interactive display apparatus and method with viewer position compensation
US5850225A (en) 1996-01-24 1998-12-15 Evans & Sutherland Computer Corp. Image mapping system and process using panel shear transforms
US6046746A (en) 1996-07-01 2000-04-04 Sun Microsystems, Inc. Method and apparatus implementing high resolution rendition of Z-buffered primitives
US5751291A (en) 1996-07-26 1998-05-12 Hewlett-Packard Company System and method for accelerated occlusion culling
US5828382A (en) * 1996-08-02 1998-10-27 Cirrus Logic, Inc. Apparatus for dynamic XY tiled texture caching
US5767589A (en) 1996-09-03 1998-06-16 Maximum Products Inc. Lighting control circuit for vehicle brake light/tail light/indicator light assembly
US5860158A (en) 1996-11-15 1999-01-12 Samsung Electronics Company, Ltd. Cache control unit with a cache request transaction-oriented protocol
US6167486A (en) 1996-11-18 2000-12-26 Nec Electronics, Inc. Parallel access virtual channel memory system with cacheable channels
US5936629A (en) 1996-11-20 1999-08-10 International Business Machines Corporation Accelerated single source 3D lighting mechanism
US6111582A (en) * 1996-12-20 2000-08-29 Jenkins; Barry L. System and method of image generation and encoding using primitive reprojection
US6697063B1 (en) 1997-01-03 2004-02-24 Nvidia U.S. Investment Company Rendering pipeline
US5852451A (en) * 1997-01-09 1998-12-22 S3 Incorporated Pixel reordering for improved texture mapping
US5949426A (en) * 1997-01-28 1999-09-07 Integrated Device Technology, Inc. Non-linear texture map blending
US5880736A (en) 1997-02-28 1999-03-09 Silicon Graphics, Inc. Method system and computer program product for shading
US5949424A (en) 1997-02-28 1999-09-07 Silicon Graphics, Inc. Method, system, and computer program product for bump mapping in tangent space
US6259452B1 (en) 1997-04-14 2001-07-10 Massachusetts Institute Of Technology Image drawing system and method with real-time occlusion culling
US6084591A (en) * 1997-04-29 2000-07-04 Ati Technologies, Inc. Method and apparatus for deferred video rendering
US5889997A (en) 1997-05-30 1999-03-30 Hewlett-Packard Company Assembler system and method for a geometry accelerator
US5920326A (en) 1997-05-30 1999-07-06 Hewlett Packard Company Caching and coherency control of multiple geometry accelerators in a computer graphics system
US6002412A (en) 1997-05-30 1999-12-14 Hewlett-Packard Co. Increased performance of graphics memory using page sorting fifos
US5997977A (en) 1997-06-05 1999-12-07 Hoya Corporation Information recording substrate and information recording medium prepared from the substrate
US6118452A (en) 1997-08-05 2000-09-12 Hewlett-Packard Company Fragment visibility pretest system and methodology for improved performance of a graphics system
US6002410A (en) * 1997-08-25 1999-12-14 Chromatic Research, Inc. Reconfigurable texture cache
US6128000A (en) 1997-10-15 2000-10-03 Compaq Computer Corporation Full-scene antialiasing using improved supersampling techniques
US6204859B1 (en) * 1997-10-15 2001-03-20 Digital Equipment Corporation Method and apparatus for compositing colors of images with memory constraints for storing pixel data
US6201540B1 (en) 1998-01-07 2001-03-13 Microsoft Corporation Graphical interface components for in-dash automotive accessories
US6259460B1 (en) 1998-03-26 2001-07-10 Silicon Graphics, Inc. Method for efficient handling of texture cache misses by recirculation
US6246415B1 (en) 1998-04-30 2001-06-12 Silicon Graphics, Inc. Method and apparatus for culling polygons
US6243744B1 (en) 1998-05-26 2001-06-05 Compaq Computer Corporation Computer network cluster generation indicator
US6650327B1 (en) 1998-06-16 2003-11-18 Silicon Graphics, Inc. Display system having floating point rasterization and floating point framebuffering
US6216004B1 (en) 1998-06-23 2001-04-10 Qualcomm Incorporated Cellular communication system with common channel soft handoff and associated method
US6263493B1 (en) 1998-07-08 2001-07-17 International Business Machines Corporation Method and system for controlling the generation of program statements
US6771264B1 (en) 1998-08-20 2004-08-03 Apple Computer, Inc. Method and apparatus for performing tangent space lighting and bump mapping in a deferred shading graphics processor
WO2000011607A1 (en) 1998-08-20 2000-03-02 Apple Computer, Inc. Deferred shading graphics pipeline processor
US6577317B1 (en) 1998-08-20 2003-06-10 Apple Computer, Inc. Apparatus and method for geometry operations in a 3D-graphics pipeline
US6275235B1 (en) 1998-12-21 2001-08-14 Silicon Graphics, Inc. High precision texture wrapping method and device
US6228730B1 (en) 1999-04-28 2001-05-08 United Microelectronics Corp. Method of fabricating field effect transistor
US6671747B1 (en) 2000-08-03 2003-12-30 Apple Computer, Inc. System, apparatus, method, and computer program for execution-order preserving uncached write combine operation
FR2814216B1 (en) * 2000-09-18 2002-12-20 Snecma Moteurs Orientation device and on-board orientation system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE36145E (en) * 1991-04-30 1999-03-16 Optigraphics Corporation System for managing tiled images using multiple resolutions
US5977977A (en) * 1995-08-04 1999-11-02 Microsoft Corporation Method and system for multi-pass rendering

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7023437B1 (en) 1998-07-22 2006-04-04 Nvidia Corporation System and method for accelerating graphics processing using a post-geometry data stream during multiple-pass rendering
US7170513B1 (en) 1998-07-22 2007-01-30 Nvidia Corporation System and method for display list occlusion branching
US7209140B1 (en) 1999-12-06 2007-04-24 Nvidia Corporation System, method and article of manufacture for a programmable vertex processing model with instruction set
US6844880B1 (en) 1999-12-06 2005-01-18 Nvidia Corporation System, method and computer program product for an improved programmable vertex processing model with instruction set
US6870540B1 (en) * 1999-12-06 2005-03-22 Nvidia Corporation System, method and computer program product for a programmable pixel processing model with instruction set
US7002588B1 (en) 1999-12-06 2006-02-21 Nvidia Corporation System, method and computer program product for branching during programmable vertex processing
US6664963B1 (en) 2000-05-31 2003-12-16 Nvidia Corporation System, method and computer program product for programmable shading using pixel shaders
US6690372B2 (en) 2000-05-31 2004-02-10 Nvidia Corporation System, method and article of manufacture for shadow mapping
US6734861B1 (en) 2000-05-31 2004-05-11 Nvidia Corporation System, method and article of manufacture for an interlock module in a computer graphics processing pipeline
US6532013B1 (en) 2000-05-31 2003-03-11 Nvidia Corporation System, method and article of manufacture for pixel shaders for programmable shading
US7068272B1 (en) 2000-05-31 2006-06-27 Nvidia Corporation System, method and article of manufacture for Z-value and stencil culling prior to rendering in a computer graphics processing pipeline
US6778181B1 (en) 2000-12-07 2004-08-17 Nvidia Corporation Graphics processing system having a virtual texturing array
US7006101B1 (en) 2001-06-08 2006-02-28 Nvidia Corporation Graphics API with branching capabilities
US6982718B2 (en) 2001-06-08 2006-01-03 Nvidia Corporation System, method and computer program product for programmable fragment processing in a graphics pipeline
US7162716B2 (en) 2001-06-08 2007-01-09 Nvidia Corporation Software emulator for optimizing application-programmable vertex processing
US6697064B1 (en) 2001-06-08 2004-02-24 Nvidia Corporation System, method and computer program product for matrix tracking during vertex processing in a graphics pipeline
US7286133B2 (en) 2001-06-08 2007-10-23 Nvidia Corporation System, method and computer program product for programmable fragment processing
US7456838B1 (en) 2001-06-08 2008-11-25 Nvidia Corporation System and method for converting a vertex program to a binary format capable of being executed by a hardware graphics pipeline
US6704025B1 (en) 2001-08-31 2004-03-09 Nvidia Corporation System and method for dual-depth shadow-mapping
US7009615B1 (en) 2001-11-30 2006-03-07 Nvidia Corporation Floating point buffer system and method for use during programmable fragment processing in a graphics pipeline
US7009605B2 (en) 2002-03-20 2006-03-07 Nvidia Corporation System, method and computer program product for generating a shader program
US8106904B2 (en) 2002-03-20 2012-01-31 Nvidia Corporation Shader program generation system and method
CN102835119A (en) * 2010-04-01 2012-12-19 英特尔公司 A multi-core processor supporting real-time 3D image rendering on an autostereoscopic display
CN102835119B (en) * 2010-04-01 2016-02-03 Intel Corporation Multi-core processor supporting real-time 3D image rendering on an autostereoscopic display

Also Published As

Publication number Publication date
US7808503B2 (en) 2010-10-05
US20070165035A1 (en) 2007-07-19
US6552723B1 (en) 2003-04-22
WO2000011602A9 (en) 2000-09-08
WO2000011603A9 (en) 2000-09-08
AU5686299A (en) 2000-03-14
AU5580799A (en) 2000-03-14
WO2000011562B1 (en) 2000-05-04
US20030067468A1 (en) 2003-04-10
WO2000011607A1 (en) 2000-03-02
US6229553B1 (en) 2001-05-08
US6693639B2 (en) 2004-02-17
WO2000011607B1 (en) 2000-05-04
US6664959B2 (en) 2003-12-16
WO2000011607A8 (en) 2000-06-08
WO2000010372A2 (en) 2000-03-02
US6525737B1 (en) 2003-02-25
AU5686199A (en) 2000-03-14
WO2000011603A2 (en) 2000-03-02
US7164426B1 (en) 2007-01-16
AU5688199A (en) 2000-03-14
WO2000011602A2 (en) 2000-03-02
US20020196251A1 (en) 2002-12-26
US6268875B1 (en) 2001-07-31
US6577305B1 (en) 2003-06-10
US6288730B1 (en) 2001-09-11
US6476807B1 (en) 2002-11-05

Similar Documents

Publication Publication Date Title
US6577305B1 (en) Apparatus and method for performing setup operations in a 3-D graphics pipeline using unified primitive descriptors
JP4205327B2 (en) Volume dataset rendering method and system
Everitt Interactive order-independent transparency
US5307450A (en) Z-subdivision for improved texture mapping
JP3344597B2 (en) Method and apparatus for tessellating graphic images
EP1323131B1 (en) Method and apparatus for anti-aliasing supersampling
US8059119B2 (en) Method for detecting border tiles or border pixels of a primitive for tile-based rendering
Gumhold Splatting illuminated ellipsoids with depth correction
US9336624B2 (en) Method and system for rendering 3D distance fields
US5224208A (en) Gradient calculation for texture mapping
US6717576B1 (en) Deferred shading graphics pipeline processor having advanced features
Zhang et al. Conservative voxelization
US20050068333A1 (en) Image processing apparatus and method of same
Westermann et al. Real‐time volume deformations
US6636232B2 (en) Polygon anti-aliasing with any number of samples on an irregular sample grid using a hierarchical tiler
US6573893B1 (en) Voxel transfer circuit for accelerated volume rendering of a graphics image
Huang et al. Fastsplats: Optimized splatting on rectilinear grids
EP1519317B1 (en) Depth-based antialiasing
US5926182A (en) Efficient rendering utilizing user defined shields and windows
US4930091A (en) Triangle classification setup method and apparatus for 3-D graphics display system
Deakin et al. Efficient ray casting of volumetric images using distance maps for empty space skipping
GB2228850A (en) Hidden surface removal using depth data
US5926183A (en) Efficient rendering utilizing user defined rooms and windows
JPH06215143A (en) Method and apparatus for representation of graphics object
JP4060375B2 (en) Spotlight characteristic forming method and image processing apparatus using the same

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): GB JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
AK Designated states

Kind code of ref document: B1

Designated state(s): GB JP

AL Designated countries for regional patents

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

B Later publication of amended claims
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
122 Ep: PCT application non-entry into European phase