|Publication number||US7567248 B1|
|Application number||US 11/119,302|
|Publication date||Jul 28, 2009|
|Filing date||Apr 28, 2005|
|Priority date||Apr 28, 2004|
|Inventors||William R. Mark, Gregory S. Johnson, Chris Burns|
|Original Assignee||Mark William R, Johnson Gregory S, Chris Burns|
This application claims priority from, and the benefit of, U.S. Provisional Patent Application Ser. No. 60/565,969, entitled “System and method for efficiently computing intersections between rays and surfaces”, filed on Apr. 28, 2004, the contents of which are expressly incorporated herein by reference in their entirety.
The present invention relates to computer graphics, and more particularly to the efficient computation of intersections between rays and surfaces in interactive three-dimensional (3D) rendering systems. These intersection tests may be used, for example, for visible surface determination and for shadow computations.
During graphics processing, a computer is commonly used to display three-dimensional representations of an object on a two-dimensional display screen. In a typical graphics computer, an object to be rendered is divided into a plurality of graphics primitives. The graphics primitives are basic components of a graphics picture and may be defined by geometry such as a point, line, vector, or polygon, such as a triangle.
To produce an image for the two-dimensional display screen, the following two steps are typically performed, as well as others not described in detail here: determining which surfaces are visible from the eye point, and determining which surfaces are in shadow.
These two tasks and others can be considered to be specific cases of the problem of determining which surfaces are visible (i.e., intersected first) along a specific set of rays.
The prior art includes a wide variety of approaches for performing visibility tests, including ray tracing, the conventional Z-buffer, shadow mapping, and shadow volumes, each discussed below.
All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety.
The ray tracing approach is highly general, but it is computationally expensive. The ray tracing approach has the additional disadvantage that the rendering system maintains a large spatially sorted data structure containing the graphics primitives. For these reasons, the ray tracing approach has not been widely used by interactive rendering systems.
The conventional Z-buffer approach is the approach most commonly used in real-time rendering systems. However, it can only evaluate a restricted set of visibility queries: those in which all rays share a common origin and whose directions are regularly spaced. This restriction is equivalent to stating that an image generated using this technique must have its sample points located in a regular pattern such as a grid, as shown in
The shadow volume technique was developed to avoid the objectionable errors of shadow mapping. Although the shadow volume technique has been used occasionally in commercial systems, it is widely considered to require excessive computational resources and to be particularly difficult to integrate into a complete and flexible image generation system.
Therefore, a need exists for a method and apparatus for computing visibility that is more efficient than ray tracing and shadow volumes; that produces fewer visual errors than shadow mapping; that is simpler to integrate into a complete system than shadow volumes; and that supports a more flexible set of visibility queries than the conventional Z-buffer. More specifically, a need exists for a method and apparatus for computing visibility that runs at real-time frame rates and whose programming interface is similar to that of the conventional Z-buffer, but that allows visibility to be computed for an irregularly arranged set of points on the image plane as shown in
The present invention provides a system and method for determining, in a computer system, the intersections between a plurality of rays residing in three-dimensional (3D) space and one or more surface elements residing in the 3D space, where the rays, or the lines corresponding to the rays, share a common intersection point. The method includes, for each of the rays, storing the intersection point of the ray with a projection surface in the 3D space in a data structure in the computer system. For each of the surface elements, the method determines, using projection, a two-dimensional (2D) region representing the projection of the surface element onto the projection surface, and then determines, using intersection testing, which points stored in the data structure are inside the 2D region. The points determined to be inside the 2D region represent intersection points between the surface element and the rays corresponding to the points.
The present invention may provide a new method and system for computing intersections between rays and surfaces. In one embodiment, computations of intersections between rays and surfaces are used for computing shadows in an interactive rendering system.
The following description consists of four major parts:
The present invention computes shadows using an improvement on the conventional shadow-mapping approach. In one embodiment, the shadow-map samples are taken at the desired points on the shadow-map image plane rather than at points on a pre-defined regular grid as the prior-art shadow mapping technique does. By taking samples at the desired points, this embodiment eliminates the aliasing artifacts produced by prior-art shadow mapping techniques.
For eye-view visibility, all of the rays share a single origin, the eye viewpoint 600. For light-view visibility, all of the rays share a single origin, as long as the light is treated as a point light source as it is in
In the case where all visibility-test rays share a common origin, this origin is typically used to define the center of projection of an image plane. With this definition, the ray directions may be interpreted as sample points on the image plane.
The overall shadow computation procedure works as follows:
First, an image is generated from the Eye Viewpoint 600 to determine the set of 3D points 606 for which shadow information is needed. This image is generated using the conventional Z-buffer rendering algorithm, which uses Sample Points 608 located on a regular grid in the Eye-view Image Plane 602. The result of this step is a 2D array of depth values (900 in
Second, the points 606 are projected onto the shadow-map image plane 604 using light viewpoint 612 as the center of projection. The purpose of this step is to determine the 2D location on the shadow-map image plane 604 of each sample that is desired. These sample locations 610 are stored in a 2D data structure that allows efficient range queries, as described in the next subsection. In one embodiment, this data structure is stored in memory as Physical Grid 700 and Node Array 710 (see
Third, the ray/surface intersection technique described in the next subsection is used to produce a shadow-map image with depth samples at exactly the desired locations in the shadow-map image plane. At the end of this step, the system holds a depth value for each sample point 610 on the shadow-map image plane 604. Each depth value represents the location 614 of a point visible from the light viewpoint along the ray originating at light viewpoint 612 and passing through a sample point 610. In one embodiment, these depth values are stored in Light-View Depth Array 906, which is indexed just like an ordinary eye-view image.
Finally, a regular Z-buffer image is rendered from the eye view, and shading is performed by using the previously-computed light-view depth values (stored in 906) to indicate whether or not each visible point is in shadow. A point 606 is in shadow if the distance from the light viewpoint 612 to the point visible from light viewpoint 614 is less than the distance from the light viewpoint 612 to the point visible from eye viewpoint 606. Otherwise, the point 606 is not in shadow.
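The shadow comparison in this final step can be sketched in a few lines. In this illustration all names are hypothetical, and the small epsilon bias (an assumption, not from the specification) absorbs numerical error in the two distance computations:

```python
def in_shadow(light_view_depth, eye_point_light_distance, epsilon=1e-5):
    """A point visible from the eye is shadowed when some other surface
    lies closer to the light along the same light-view ray."""
    return light_view_depth < eye_point_light_distance - epsilon

# An occluder at distance 3.0 from the light shadows a point at distance 5.0.
assert in_shadow(3.0, 5.0)
# When the nearest surface seen from the light is the point itself,
# the two distances agree and the point is lit.
assert not in_shadow(5.0, 5.0)
```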
The present invention uses a new method and system of computing intersections between rays and surfaces.
Although this problem is conceptually simple, solving this problem is computationally expensive when the number of Rays 501 and Surface Elements 511 is large. In interactive computer graphics systems, it is common to have one million rays and several million surface elements. The complete set of intersections may be found 30 or more times each second.
Therefore it is common to place specific constraints on the permissible rays to facilitate efficient solutions to the problem. The constraints are most easily expressed in terms of restrictions on the lines that are formed by extending each ray to infinity in both directions. In
To further facilitate an efficient solution to the problem, it is common to express the ray directions as 2D points 508 on a projection surface 502. The location of each 2D point 508 may be found by computing the intersection of the corresponding Ray 520 with Projection Surface 502. More precisely, the system computes the intersection of Line 514 with Projection Surface 502. In one embodiment, the Projection Surface 502 is a plane, but other projection surfaces such as cylinders or cubes may be used.
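The computation of a 2D point 508 from a ray can be sketched as follows for a planar projection surface. The plane z = 1 in the center-of-projection coordinate frame is an illustrative choice, and all names are hypothetical:

```python
def project_to_plane(point, center, plane_z=1.0):
    """Projects a 3D point through the center of projection onto the
    plane z = plane_z, yielding its 2D location on that plane."""
    px, py, pz = (p - c for p, c in zip(point, center))
    if pz == 0:
        raise ValueError("ray parallel to projection plane")
    t = plane_z / pz          # parameter where the line meets the plane
    return (px * t, py * t)   # 2D point on the projection surface

# A point straight ahead of the center projects to the plane origin.
assert project_to_plane((0.0, 0.0, 4.0), (0.0, 0.0, 0.0)) == (0.0, 0.0)
# Lateral offsets scale by 1/z under perspective projection.
assert project_to_plane((2.0, 1.0, 2.0), (0.0, 0.0, 0.0)) == (1.0, 0.5)
```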
In the conventional Z-buffer, the ray directions are restricted such that the 2D points 508 lie in a regular pattern on Projection Surface 502. The computer system of
Given this set of restrictions of the conventional Z-buffer, the computer system of
The method just described finds all intersections between Rays 501 and Surface Elements 511. However, there may be more than one such intersection point for each Ray 501. Often, the goal is to find the closest such intersection point to Center of Projection 500, since the closest intersection point is the one that is “visible” from the Center of Projection 500. As in the conventional Z-buffer, the system of
A major difference between the system of
The operation of the system of
In the intersection testing step, the Surface Elements 511 are processed one at a time (more precisely, a small number in parallel). For each surface element, the following sub-steps are performed: The Surface Element 512 is projected onto the image plane, yielding a 2D region 510. The system then finds all points within 2D spatial data structure 708 that fall within the 2D region 510. This process will be referred to as a 2D range query.
In one embodiment, the 2D range query is implemented by first determining which Physical Grid Cells 706 are overlapped by the 2D region 510. Then, the 2D point locations stored at the linked list associated with each such grid cell are examined one at a time, and each point location is tested to determine if it lies within the 2D region 510. This brief description simplifies and omits some details that are described in more depth in the next subsection.
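The 2D range query just described can be sketched in software. This illustration (all names assumed) uses Python lists in place of the hardware linked lists, and an axis-aligned query box in place of the projected triangular region 510:

```python
from collections import defaultdict

class PointGrid:
    """A grid of per-cell point lists supporting the 2D range query
    described above; the cell size is an illustrative choice."""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)   # plays the role of the linked lists

    def insert(self, x, y):
        self.cells[(int(x // self.cell), int(y // self.cell))].append((x, y))

    def range_query(self, xmin, ymin, xmax, ymax):
        hits = []
        # First find every grid cell overlapped by the query region...
        for cx in range(int(xmin // self.cell), int(xmax // self.cell) + 1):
            for cy in range(int(ymin // self.cell), int(ymax // self.cell) + 1):
                # ...then test each point stored in that cell individually.
                for (x, y) in self.cells.get((cx, cy), ()):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append((x, y))
        return hits

g = PointGrid()
g.insert(0.5, 0.5)
g.insert(2.5, 2.5)
assert g.range_query(0.0, 0.0, 1.0, 1.0) == [(0.5, 0.5)]
```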
Figures Showing How Ray/Surface Intersection Testing Works at a High Level
Figures Showing How Ray/Surface Intersection Testing Works in Detail
The overall operation of the present invention is described in conjunction with
During Irregular Rasterization Phase 1504, the CPU 1110 feeds geometric primitives describing the scene to the Graphics Processing Unit 1112, in the same manner that it does in systems based on the prior-art technique of shadow-map generation. This similarity in usage from the point of view of application programs is a major advantage of our technique as compared to the alternative techniques of ray tracing and shadow volumes.
Refer again to
The following subsections describe the components of Graphics Processing Unit 1112 in greater detail and their roles in each of the two key computational phases 1502 and 1504 of the irregular Z-buffer as it is used for irregular shadow mapping.
Each Primitive Processor 1200 can access data for all three vertices of the triangle that it is processing at any particular point in time. The primitive processor 1200 is used during the Irregular Rasterization Phase 1504.
Fragment Processor Overview
As compared to prior-art fragment processors, Fragment Processor 1206 has three major new capabilities. These are: the ability to generate an arbitrary number of output fragments from each input fragment, the ability to specify the frame buffer address to be written for each output fragment (e.g., pixel x/y), subject to certain constraints, and the ability to perform “min( )” and “max( )” associative reduction operations. Additionally, the fragment processor has access to the homogeneous equations that are computed by the primitive processor. These equations describe the edges and Z interpolation of the fragment's triangle.
Each fragment processor 1206 is multithreaded with zero cycle context switches and round-robin scheduling, allowing the processor to remain active even when most threads are waiting on cache misses. In an illustrative embodiment, there are 16 threads on each processor. Each processor 1206 may be simple—it issues one scalar or one 4-wide vector operation each cycle.
Fragment Processor Operation during Construction Phase 1502
Once the Eye-View Image Pixel 608 has been assigned to a Fragment Processor 1206, the Fragment Processor 1206 begins the work described in
The values of Pgrid_width, Pgrid_height, Tile_height, and Tile_width are not critical for correctness and thus may be tuned to optimize performance for particular hardware and scene configurations. In one embodiment, Pgrid_width is 512, Pgrid_height is 512, Tile_width is 8, and Tile_height is 4.
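One plausible interpretation of a tiled physical-grid addressing scheme using these parameters is sketched below. Only the parameter values come from the text; the address arithmetic itself is an assumption made for illustration:

```python
def tiled_cell_address(x, y, pgrid_width=512, tile_w=8, tile_h=4):
    """Maps a physical-grid cell index (x, y) to a linear memory address
    under an 8x4 tiling, so neighboring cells tend to share cache lines."""
    tiles_per_row = pgrid_width // tile_w
    tile = (y // tile_h) * tiles_per_row + (x // tile_w)   # which tile
    within = (y % tile_h) * tile_w + (x % tile_w)          # offset inside it
    return tile * (tile_w * tile_h) + within

assert tiled_cell_address(0, 0) == 0
assert tiled_cell_address(7, 3) == 31    # last cell of the first 8x4 tile
assert tiled_cell_address(8, 0) == 32    # first cell of the next tile
```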
Next, in Step 1607, the fragment processor updates registers that track four reduction-operation values: the minimum and maximum logical-grid x coordinate and the minimum and maximum logical-grid y coordinate. The values stored in these registers persist across fragments, and are retrieved from each Fragment Processor 1206 by CPU 1110 at the completion of Construction Phase 1502. At the completion of Construction Phase 1502, the CPU also performs a final reduction to obtain the overall min and max values, which the CPU uses at the start of Irregular Rasterization Phase 1504 to configure the Rasterizer and Stencil Test Unit 1204 for viewport clipping.
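A software stand-in for these reduction registers and the CPU's final reduction over all fragment processors can be sketched as follows (class and function names are illustrative):

```python
class ReductionRegisters:
    """Tracks min/max logical-grid coordinates across fragments, as in
    Step 1607; values persist until the CPU reads them back."""
    def __init__(self):
        self.min_x = self.min_y = float("inf")
        self.max_x = self.max_y = float("-inf")

    def update(self, gx, gy):
        self.min_x = min(self.min_x, gx); self.max_x = max(self.max_x, gx)
        self.min_y = min(self.min_y, gy); self.max_y = max(self.max_y, gy)

def final_reduction(regs):
    """The CPU-side reduction over all fragment processors' registers."""
    return (min(r.min_x for r in regs), min(r.min_y for r in regs),
            max(r.max_x for r in regs), max(r.max_y for r in regs))

a, b = ReductionRegisters(), ReductionRegisters()
a.update(3, 7); b.update(1, 9)
assert final_reduction([a, b]) == (1, 7, 3, 9)
```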
Returning to the behavior in Fragment Processor 1206 during Construction Phase 1502,
Raster Operation Unit Overview
The ROP unit in the present invention is more flexible than the ROP in prior art graphics processors. In particular, the ROP may generate writes to memory addresses other than the fragment's original frame buffer address.
The ROP may use a pixel-cache organization (see “A Configurable Pixel Cache for Fast Image Generation”, Gorris et al. IEEE Computer Graphics & Applications, 1987). If desired, this second write address can be computed within the ROP from data read from the original frame buffer address. Generating new writes from previous data is used in the data structure construction phase of the irregular Z-buffer algorithm.
Next, details of the ROP architecture are described. The Atomicity Enforcer 1422 ensures that the ROP has no more than one transaction in progress for any particular pixel. This atomicity enforcer is similar to that described in a prior-art patent (U.S. Pat. No. 6,734,861, System, method and article of manufacture for an interlock module in a computer graphics processing pipeline, Van Dyke et al.). The atomicity enforcer 1422 maintains a table indexed by a hash of the pixel address, indicating whether or not a fragment is in flight for a given hash. If a fragment is in flight, further fragments for that pixel are stalled (in order) in a queue until the offending fragment clears. Meanwhile, fragments for other pixels may continue to pass through the atomicity enforcer 1422 from the fragment network 1434. The Release Signal 1432 from a compute unit 1430 signals to the atomicity enforcer 1422 when all memory transactions related to a given pixel have completed, at which point it is safe to begin processing another fragment for the same pixel.
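The behavior of the atomicity enforcer can be sketched as follows. The bucket count, the queueing details, and all names are assumptions made for illustration:

```python
from collections import deque

class AtomicityEnforcer:
    """Stalls fragments whose pixel hash already has a transaction in
    flight, releasing the next stalled fragment, in order, on completion."""
    def __init__(self, buckets=64):
        self.buckets = buckets
        self.in_flight = [False] * buckets
        self.stalled = [deque() for _ in range(buckets)]

    def submit(self, pixel_addr):
        """Returns True if the fragment may proceed immediately."""
        h = hash(pixel_addr) % self.buckets
        if self.in_flight[h]:
            self.stalled[h].append(pixel_addr)   # wait, in order
            return False
        self.in_flight[h] = True
        return True

    def release(self, pixel_addr):
        """Called when all memory transactions for a pixel complete;
        returns the next stalled fragment for that bucket, if any."""
        h = hash(pixel_addr) % self.buckets
        if self.stalled[h]:
            return self.stalled[h].popleft()     # it proceeds immediately
        self.in_flight[h] = False
        return None

ae = AtomicityEnforcer()
assert ae.submit((4, 2)) is True       # first fragment for this pixel proceeds
assert ae.submit((4, 2)) is False      # second is stalled behind it
assert ae.release((4, 2)) == (4, 2)    # and is released in order
```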
For each fragment, the Merge Buffer 1428 sends an address to the Left Pixel Cache 1424 and if needed to the Right Pixel Cache 1426. The two caches store data from disjoint address sets. For example, the left pixel cache can store color data while the right pixel cache stores Z values. Using two caches provides more bandwidth than a single cache, and allows cache behavior to be tuned separately for two data types. The Merge Buffer 1428 may contain an internal SRAM which holds fragments until they are ready to continue to the Compute Unit 1430. A fragment becomes ready to continue to the Compute Unit 1430 when all requested data has returned from the Left Pixel Cache 1424 and the Right Pixel Cache 1426. Thus, the purpose of the Merge Buffer 1428 is to merge the original incoming fragment data with the data read from the left and right pixel caches so that all needed data for the fragment is ready for the Compute Unit 1430.
The Compute Unit 1430 comprises a short pipeline that performs a set of simple configurable comparison and address-generation operations at a throughput of one fragment per cycle. It conditionally generates up to two pairs of data and addresses to be written, one pair each sent to the Left Pixel Cache 1424 and the Right Pixel Cache 1426. If the comparison within the Compute Unit 1430 fails, it may choose not to write any data to the caches. In this case, the Compute Unit 1430 immediately sends the fragment address over Release Signal 1432. If any data is written to the cache(s), the Compute Unit 1430 holds the fragment address in an internal buffer until it receives acknowledgement signals from all caches to which data was written. Once all such acknowledgement signals have been received, the compute unit sends the fragment address over Release Signal 1432 and removes the fragment from its internal buffer.
The Left Pixel Cache 1424 and Right Pixel Cache 1426 are both connected to Memory Network 1436 (or network 1212 of
Raster Operation Unit Operation during Construction Phase 1502
Raster and Stencil Test Unit Operation during Irregular Rasterization Phase 1504
Refer again to
Fragment Processor Operation during Irregular Rasterization Phase 1504
Once the fragment has been assigned to a Fragment Processor 1206, the Fragment Processor 1206 begins the process described in
It is useful to explain several steps in
The purpose of Step 1906 is to read a single node from the linked list. In this step, the 2D index (X,Y) into the Node Array 710 is used as a 2D texture coordinate. The fragment processor executes a texture-read operation to read the specified array element from the texture corresponding to Node Array 710. The data read from the texture includes the precise light-view image-plane coordinates of the sample, as (x,y). The data read from the texture also includes the 2D array index (X,Y) of the next node in the linked list. This index may be a NULL value, indicating that the 2D index is not valid and that the end of the linked list has been reached. The sample location (x,y) obtained from the linked list node is tested against all three edges of the triangle, using for example the Olano-Greer edge coefficients (step 1908). The process ends if the sample location does not satisfy all three edge equations (step 1910).
Otherwise, the depth of the triangle fragment is computed (step 1912) and one framebuffer fragment is emitted (step 1914). The framebuffer address of the fragment is set to (X,Y), which the Raster Operation Unit uses as an index into the Light-View Depth Array, and the depth is set using the depth from step 1912. The active node index is then set to the "next" field of the current linked list node (step 1916).
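The per-sample edge test of steps 1908-1910 can be illustrated with a plain cross-product edge function in place of the precomputed Olano-Greer coefficient form (a simplification; all names are hypothetical):

```python
def edge(a, b, p):
    """Signed area test: positive when p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def sample_inside_triangle(v0, v1, v2, p):
    """Tests a linked-list sample location against all three edge
    equations of a counter-clockwise triangle."""
    return (edge(v0, v1, p) >= 0 and
            edge(v1, v2, p) >= 0 and
            edge(v2, v0, p) >= 0)

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
assert sample_inside_triangle(*tri, (1.0, 1.0))       # sample covered
assert not sample_inside_triangle(*tri, (5.0, 5.0))   # sample outside
```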
Raster Operation Unit Operation During Irregular Rasterization Phase 1504
In the Irregular Rasterization Phase 1504, the operation of the Raster Operation Unit 1210 is straightforward, since the unit performs a standard read/compare/conditional-write operation of the kind used for a conventional Z-buffer test. The read and write are made to Light-View Depth Array 906. Note that this array is a 2D array in memory, with the elements indexed by the eye-view (i,j) corresponding to the light-view sample point. The address for the read and write comes from the incoming fragment, and was originally determined by Fragment Processor 1206 as described earlier. The Merge Buffer 1428 issues the read, and the Compute Unit 1430 performs the comparison with the fragment's Z and conditionally performs the write. In this phase, the ROP uses both Left Pixel Cache 1424 and Right Pixel Cache 1426. Addresses are interleaved between the two caches, which provides higher performance than the more straightforward approach of using only one of the caches.
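The read/compare/conditional-write just described amounts to the classic Z test; a minimal sketch (array indexing and names are illustrative):

```python
def z_test_and_write(depth_array, index, fragment_z):
    """Standard Z-buffer update: keep the fragment only if it is
    nearer than the depth already stored at its address."""
    if fragment_z < depth_array[index]:
        depth_array[index] = fragment_z
        return True
    return False

depths = {(3, 5): float("inf")}
assert z_test_and_write(depths, (3, 5), 2.0)        # first fragment wins
assert not z_test_and_write(depths, (3, 5), 4.0)    # farther fragment rejected
assert depths[(3, 5)] == 2.0
```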
To improve the spatial and temporal reuse of cached data as well as to avoid cache coherence issues, pixel addresses are statically mapped to ROPs. Fragment Network 1208 routes fragments to the ROP that "owns" the address to which they are being written. The fragment network is an m×n network capable of buffering input from up to m fragment processors and routing output to up to n ROP units each cycle. This routing network is not present in prior art graphics processors, but is necessitated by the dynamic nature of the data structures used in the system of
Memory Layout of Data Structures
To improve cache performance, the data structures shown in
Interface to Application Program
The graphics processing unit 1112 that has been described is capable of being configured such that race conditions and non-deterministic behavior occur. Such behavior does not occur in the final output when the architecture performs the operations described earlier in this specification, but it may occur in intermediate steps, and it could occur in the final output if the architecture were configured in a different manner. In a graphics processor architecture it is useful to prohibit such configurations, to ensure compatibility across a product line as well as to ensure frame-to-frame consistency.
Thus, in one embodiment, the potentially non-deterministic capabilities of the graphics processing unit are not exposed directly. These potentially non-deterministic capabilities are the low-level ROP configuration and the ability of the fragment processor to specify the frame buffer address of output fragments. Instead, the construction phase and irregular rasterization phase are exposed as high-level capabilities via extensions to the software API (specifically OpenGL and DirectX). These high level capabilities block the application program from directly accessing the grid of linked lists data structure. Instead, this data structure is accessed via an opaque handle. The API allows this data structure to be created from an eye-view image via one API call, and allows this data structure to be used for irregular rasterization via a second API call.
The embodiment described explicitly stores sample locations in a two-dimensional spatial data structure rather than implicitly representing them with a regular pattern. In this embodiment, the data structure is a grid of linked lists. However, the data structure may be any spatial data structure that supports efficient range queries, such as a k-d tree, BSP tree, quad tree or a simple grid.
In another embodiment, the data structure may be a data structure that places some restrictions on the configuration of sample locations that can be stored. In one specific alternative embodiment, the data structure is a grid of fixed-length lists or a grid of fixed-length arrays. During the construction phase, samples are discarded if they cannot be inserted into the data structure. In a second specific alternative embodiment, the grid data structure stores only one point per grid cell. This restricted approach still allows the location within the grid cell of this single sample point to be specified. Other alternative embodiments along these same lines will be obvious to one skilled in the art.
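The first specific alternative embodiment, a grid of fixed-length lists that discards samples on overflow, can be sketched as follows (names and capacity are illustrative):

```python
class FixedLengthGrid:
    """A grid whose cells hold at most `cap` samples; inserts into a
    full cell are discarded, as in the alternative embodiment above."""
    def __init__(self, cap=4, cell=1.0):
        self.cap, self.cell = cap, cell
        self.cells = {}

    def insert(self, x, y):
        key = (int(x // self.cell), int(y // self.cell))
        bucket = self.cells.setdefault(key, [])
        if len(bucket) >= self.cap:
            return False          # sample discarded: cell is full
        bucket.append((x, y))
        return True

g = FixedLengthGrid(cap=1)
assert g.insert(0.2, 0.2)         # first sample in the cell is stored
assert not g.insert(0.8, 0.8)     # second sample in the same cell is dropped
```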
Store Data Structures in on-Chip Memory
An advantage of the present invention is that it produces artifact-free results with a strict bound on the amount of memory used for the data structures. This advantage permits an alternative embodiment in which some or all of the data structures described in
Line Segments Rather than Rays
Instead of intersecting rays with surface elements, line segments may be used to intersect with surface elements. Intersections beyond a certain distance along the ray may be ignored, or each ray may have an associated maximum distance stored with it in the data structure, with intersections beyond this distance ignored. As used herein, the term “ray” should be understood to include this case of line segments as well as similar modifications that would be obvious to one skilled in the art.
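Treating rays as line segments only changes the acceptance test for an intersection; a minimal sketch (the parameterization and names are assumed):

```python
def segment_intersection_accepted(t_hit, max_t):
    """A ray/surface intersection at ray parameter t_hit counts only
    if it lies within the segment's stored maximum distance max_t."""
    return t_hit is not None and 0.0 <= t_hit <= max_t

assert segment_intersection_accepted(0.5, 1.0)        # within the segment
assert not segment_intersection_accepted(2.0, 1.0)    # beyond max distance
assert not segment_intersection_accepted(None, 1.0)   # no intersection at all
```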
More specifically, an alternative embodiment for shadow computation is described. In
Points on Projection Surface Specified Directly
In one embodiment described above, the points on the projection surface are computed by computing the intersection between the rays and the projection surface. In an alternative embodiment, the points on the projection surface are directly specified by the application program or software device driver. The application program or device driver may also specify the projection surface and the center of projection. In this embodiment, the points on the projection surface implicitly represent the rays. This alternative embodiment may be desirable when points are a more natural representation than rays for the application program or software device driver.
The center of projection 500 may be conceptually located at infinity, yielding an orthographic projection. The details of applying the system and method to an orthographic projection will be obvious to one skilled in the art.
Surface Elements Representing Boundaries of Volume Elements
The present invention may be used to compute intersections between rays and volume elements. In this alternative embodiment, the Surface Elements 511 represent boundaries of the volume elements.
Maintaining Fragment Order
Real-time rendering APIs define precise ordering semantics for non-commutative operations such as blending and Z-buffered color writes. In some cases these semantics are directly useful to application programmers, but they may also guard against non-determinism from frame to frame or from one hardware generation to another.
Fortunately, order does not matter when using the irregular Z-buffer to generate shadow or other Z-only maps. In the construction phase of irregular shadow mapping the ordering of nodes in each linked list is unimportant and opaque to the user if the data structure is hidden behind a high-level API as explained previously. In the rasterization phase, fragment order is not visible to the user because the Z comparison and update operation is commutative so long as it does not carry auxiliary information with it, such as color.
However, the irregular Z-buffer can be used for different applications in which color is carried with Z (e.g. reflection map generation from reflective objects near a surface). For these applications, the preservation of fragment order during rasterization may matter.
Because the system routes fragments from the Raster and Stencil Test Unit to Fragment Processing Units based on the fragment's logical grid cell, the system ensures that the fragments generated by a fragment program will belong to a particular logical grid cell. Thus, in this alternative embodiment, global fragment order is maintained automatically by maintaining fragment order within each fragment processor. In more detail, this is accomplished by ensuring that fragments are retired from the fragment processor in the same order that they enter it.
Other Graphics Processing Unit Architectures
One feature of the Graphics Processing Unit is that it may utilize a highly parallel internal architecture to achieve high performance. Other parallel architectures may be used as a Graphics Processing Unit. One such alternative embodiment is described in The Irregular Z-Buffer and its Application to Shadow Mapping, Gregory S. Johnson, William R. Mark, and Christopher A. Burns, The University of Texas at Austin, Department of Computer Sciences Technical Report TR-04-09, Apr. 15, 2004, incorporated herein by reference. Another such alternative embodiment uses a chip-multiprocessor architecture, such as Sun Microsystems' Niagara architecture.
SIMD Fragment Processor Architecture
One feature of the Graphics Processing Unit is that it may utilize a Multiple Instruction Multiple Data architecture. With adjustments to the operations illustrated in
Construction Phase Performed on CPU
In another alternative embodiment, the construction phase of the technique is executed by a processor on a different semiconductor chip than the Graphics Processor. More specifically, the construction phase is performed by the CPU.
Removal of any-to-any Fragment-Network Routing in Irregular Rasterization Phase
In an alternative embodiment, the fragment network performing any-to-any routing during the irregular rasterization phase is eliminated. This is done by dynamically allocating storage for linked list nodes at new addresses, instead of storing linked list nodes at their eye-space index. The memory allocated to each node is chosen such that the node is stored in memory “owned” by the ROP just below the fragment processor which “owns” the node's logical grid cell. Thus, during the irregular rasterization phase, fragments always pass directly from a fragment processor to the ROP below that fragment processor. Memory allocation is performed separately for the region of memory “owned” by each ROP. Memory allocation may be performed by incrementing one or more counters associated with each ROP.
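The counter-based per-ROP memory allocation can be sketched as a bump allocator with one counter per ROP region (all names and the address arithmetic are illustrative):

```python
def allocate_node(counters, rop_id, region_base, region_size):
    """Bump-allocates the next linked-list node slot in the memory
    region owned by the given ROP, by incrementing that ROP's counter."""
    offset = counters[rop_id]
    if offset >= region_size:
        raise MemoryError("ROP region exhausted")
    counters[rop_id] = offset + 1            # one counter per ROP
    return region_base[rop_id] + offset      # address of the new node

counters = {0: 0, 1: 0}
base = {0: 0x1000, 1: 0x2000}
assert allocate_node(counters, 0, base, 256) == 0x1000
assert allocate_node(counters, 0, base, 256) == 0x1001   # next slot in region 0
assert allocate_node(counters, 1, base, 256) == 0x2000   # region 1 is separate
```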
Because one embodiment is described for shadow computations, some of the steps in this embodiment combine operations specific to shadow computation with operations that would be used for any type of visibility computation that benefits from an irregular image-plane sample pattern such as that shown in
Step 1502 is described in more detail in
Other uses for the system and method of intersection testing include generation of reflection maps, adaptive sampling (e.g. for anti-aliasing), and limited forms of ray tracing as described in “The Path-Buffer”—Cornell University Technical Report PCG-95-4, Bruce Walter and Jed Lengyel.
In the foregoing description, various methods, apparatus, and specific embodiments are described. However, it should be obvious to one conversant in the art that various alternatives, modifications, and changes may be possible without departing from the spirit and the scope of the invention, which is defined by the metes and bounds of the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4943938 *||Oct 14, 1986||Jul 24, 1990||Hitachi, Ltd.||System for displaying shaded image of three-dimensional object|
|US5083287 *||Jul 14, 1989||Jan 21, 1992||Daikin Industries, Inc.||Method and apparatus for applying a shadowing operation to figures to be drawn for displaying on crt-display|
|US5239624 *||Apr 17, 1991||Aug 24, 1993||Pixar||Pseudo-random point sampling techniques in computer graphics|
|US5276532 *||Nov 26, 1991||Jan 4, 1994||Xerox Corporation||Split-level frame buffer|
|US5377313 *||Jan 29, 1992||Dec 27, 1994||International Business Machines Corporation||Computer graphics display method and system with shadow generation|
|US5583975 *||Jan 24, 1994||Dec 10, 1996||Matsushita Electric Industrial Co., Ltd.||Image generating apparatus and method of generating an image by parallel processing thread segments|
|US5742749 *||Feb 20, 1996||Apr 21, 1998||Silicon Graphics, Inc.||Method and apparatus for shadow generation through depth mapping|
|US5808620 *||Sep 12, 1995||Sep 15, 1998||IBM Corporation||System and method for displaying shadows by dividing surfaces by occlusion into umbra, penumbra, and illuminated regions from discontinuity edges and generating mesh|
|US5828378 *||May 31, 1996||Oct 27, 1998||Ricoh Company, Ltd.||Three dimensional graphics processing apparatus processing ordinary and special objects|
|US6018350 *||Oct 29, 1996||Jan 25, 2000||Real 3D, Inc.||Illumination and shadow simulation in a computer graphics/imaging system|
|US6067097 *||Apr 27, 1998||May 23, 2000||Fuji Xerox Co., Ltd.||Drawing processing apparatus|
|US6476805 *||Dec 23, 1999||Nov 5, 2002||Microsoft Corporation||Techniques for spatial displacement estimation and multi-resolution operations on light fields|
|US6489955 *||Jun 7, 1999||Dec 3, 2002||Intel Corporation||Ray intersection reduction using directionally classified target lists|
|US6597359 *||May 17, 2000||Jul 22, 2003||Raychip, Inc.||Hierarchical space subdivision hardware for ray tracing|
|US6639597 *||Feb 28, 2000||Oct 28, 2003||Mitsubishi Electric Research Laboratories Inc||Visibility splatting and image reconstruction for surface elements|
|US6677946 *||Mar 1, 2000||Jan 13, 2004||Sony Computer Entertainment Inc.||Method of, an apparatus for, and a recording medium comprising a program for, processing an image|
|US6680735 *||Nov 17, 2000||Jan 20, 2004||Terarecon, Inc.||Method for correcting gradients of irregular spaced graphic data|
|US6734861||Oct 16, 2000||May 11, 2004||Nvidia Corporation||System, method and article of manufacture for an interlock module in a computer graphics processing pipeline|
|US6741247 *||Nov 8, 1999||May 25, 2004||Imagination Technologies Limited||Shading 3-dimensional computer generated images|
|US6760024 *||Jul 19, 2000||Jul 6, 2004||Pixar||Method and apparatus for rendering shadows|
|US6798410 *||Nov 8, 1999||Sep 28, 2004||Imagination Technologies Limited||Shading 3-dimensional computer generated images|
|US6876362 *||Jul 10, 2002||Apr 5, 2005||Nvidia Corporation||Omnidirectional shadow texture mapping|
|US6906715 *||Nov 8, 1999||Jun 14, 2005||Imagination Technologies Limited||Shading and texturing 3-dimensional computer generated images|
|US7023438 *||Oct 14, 2003||Apr 4, 2006||Pixar||Method and apparatus for rendering shadows|
|US7034825 *||Aug 24, 2001||Apr 25, 2006||Stowe Jason A||Computerized image system|
|US7046244 *||Jun 7, 2002||May 16, 2006||Mental Images GmbH & Co. KG||System and method for rendering images using a strictly-deterministic methodology including recursive rotations for generating sample points|
|US7050054 *||May 16, 2001||May 23, 2006||Ngrain (Canada) Corporation||Method, apparatus, signals and codes for establishing and using a data structure for storing voxel information|
|US7126605 *||Oct 7, 2003||Oct 24, 2006||Munshi Aaftab A||Method and apparatus for implementing level of detail with ray tracing|
|US7133041 *||Feb 26, 2001||Nov 7, 2006||The Research Foundation Of State University Of New York||Apparatus and method for volume processing and rendering|
|US7136081 *||May 25, 2001||Nov 14, 2006||Nvidia Corporation||System and method of line sampling object scene information|
|US7170510 *||Nov 14, 2003||Jan 30, 2007||Sun Microsystems, Inc.||Method and apparatus for indicating a usage context of a computational resource through visual effects|
|US20020050990 *||Oct 9, 2001||May 2, 2002||Henry Sowizral||Visible-object determination for interactive visualization|
|US20020171644 *||Mar 31, 2001||Nov 21, 2002||Reshetov Alexander V.||Spatial patches for graphics rendering|
|US20030011618 *||Sep 13, 2002||Jan 16, 2003||Sun Microsystems, Inc.||Graphics system with a programmable sample position memory|
|US20030016218 *||Apr 26, 2001||Jan 23, 2003||Mitsubishi Electric Research Laboratories, Inc.||Rendering discrete sample points projected to a screen space with a continuous resampling filter|
|US20030156112 *||May 16, 2001||Aug 21, 2003||Halmshaw Paul A||Method, apparatus, signals and codes for establishing and using a data structure for storing voxel information|
|US20030179203 *||Feb 10, 2003||Sep 25, 2003||Sony Electronics, Inc.||System and process for digital generation, placement, animation and display of feathers and other surface-attached geometry for computer generated imagery|
|US20030227457 *||Jun 6, 2002||Dec 11, 2003||Pharr Matthew Milton||System and method of using multiple representations per object in computer graphics|
|US20040001062 *||Jun 26, 2002||Jan 1, 2004||Pharr Matthew Milton||System and method of improved calculation of diffusely reflected light|
|US20040100466 *||Nov 18, 2003||May 27, 2004||Deering Michael F.||Graphics system having a variable density super-sampled sample buffer|
|US20040104915 *||Nov 19, 2003||Jun 3, 2004||Kenichi Mori||Graphic computing apparatus|
|US20040174360 *||Mar 3, 2003||Sep 9, 2004||Deering Michael F.||System and method for computing filtered shadow estimates using reduced bandwidth|
|US20040174376 *||Mar 3, 2003||Sep 9, 2004||Deering Michael F.||Support of multi-layer transparency|
|US20040174378 *||Mar 3, 2003||Sep 9, 2004||Deering Michael F.||Automatic gain control, brightness compression, and super-intensity samples|
|US20040257365 *||Mar 26, 2004||Dec 23, 2004||Stmicroelectronics Limited||Computer graphics|
|US20050134588 *||Dec 22, 2003||Jun 23, 2005||Hybrid Graphics, Ltd.||Method and apparatus for image processing|
|US20050179686 *||Jan 11, 2005||Aug 18, 2005||Pixar||Flexible and modified multiresolution geometry caching based on ray differentials|
|US20050264568 *||Nov 19, 2002||Dec 1, 2005||Alexander Keller||Computer graphic system and computer-implemented method for generating images using a ray tracing methodology that makes use of a ray tree generated using low-discrepancy sequences and ray tracer for use therewith|
|US20060112115 *||Dec 30, 2005||May 25, 2006||Microsoft Corporation||Data structure for efficient access to variable-size data objects|
|US20060197780 *||Jun 7, 2004||Sep 7, 2006||Koninklijke Philips Electronics, N.V.||User control of 3d volume plane crop|
|US20070040832 *||Jul 30, 2004||Feb 22, 2007||Tan Tiow S||Trapezoidal shadow maps|
|1||Aila, T. et al. (Jun. 21, 2004) "Alias Free Shadow Maps", Proceedings of Eurographics Symposium on Rendering 2004.|
|2||*||Aykanat, Cevdet; Isler, Veysi; Ozguc, Bulent; "An Efficient Parallel Spatial Subdivision Algorithm for Object-Based Parallel Ray Tracing;" 1994; First Bilkent Computer Graphics Conference; pp. 1-8.|
|3||*||De Floriani, Leila; "Surface Representations Based on Triangular Grids;" Feb. 1987; The Visual Computer; vol. 3, No. 1, pp. 27-50.|
|4||Deering, M. et al. (2002) "The SAGE Graphics Architecture", In SIGGRAPH 2002: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, New York, NY, USA, pp. 683-692.|
|5||*||E. A. Haines and D. P. Greenberg; "The Light Buffer: A Shadow Testing Accelerator;" Sep. 1986; IEEE Computer Graphics and Applications; vol. 6, No. 9; pp. 6-16.|
|6||Fernando, R. et al., (2001) "Adaptive Shadow Maps", In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH 2001) ACM Press, pp. 387-390.|
|7||*||Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F.; "Computer Graphics Principles and Practice;" 1996, Addison-Wesley Publishing Company; Second Edition; p. 788.|
|8||*||Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F.; "Computer Graphics Principles and Practice;" 1996, Addison-Wesley Publishing Company; Second Edition; pp. 701-705, 721-727, 758-764 and 792-795.|
|9||*||Fussell, Donald; Subramanian, K. R.; "Fast Ray Tracing Using K-D Trees;" Mar. 1988; University of Texas at Austin Department of Computer Science; pp. 1-21.|
|10||Johnson, G. et al., "The Irregular Z-Buffer: Hardware Acceleration for Irregular Data Structures", ACM Transactions on Graphics, (Accepted to appear Oct. 2005 or Jan. 2006, but already published in draft form on the web).|
|11||Johnson, G. et al., (Apr. 15, 2004), "The Irregular Z-Buffer and its Application to Shadow Mapping", The University of Texas at Austin, Department of Computer Sciences, Technical Report TR-04-09.|
|12||*||Kilgard, Mark J.; "OpenGL Shadow Mapping with Today's OpenGL Hardware;" Game Developers Conference 2001, pp. 1-106.|
|13||Sen, P. et al., (2003), "Shadow Silhouette Maps", ACM Transactions on Graphics (TOG) 22:3, pp. 521-526.|
|14||Whitted, T. (1980) "An Improved Illumination Model for Shaded Display", Communications of the ACM 23 (Jun.), pp. 343-349.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8102391 *||Apr 11, 2008||Jan 24, 2012||International Business Machines Corporation||Hybrid rendering of image data utilizing streaming geometry frontend interconnected to physical rendering backend through dynamic accelerated data structure generator|
|US8432396 *||Jun 8, 2007||Apr 30, 2013||Apple Inc.||Reflections in a multidimensional user interface environment|
|US8730264||Sep 26, 2011||May 20, 2014||Google Inc.||Determining when image elements intersect|
|US8928675||Feb 13, 2014||Jan 6, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for encoders to support incoherent ray traversal|
|US8947447||Feb 13, 2014||Feb 3, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for ray binning to support incoherent ray traversal|
|US8952963||Feb 13, 2014||Feb 10, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for a grid traversal unit to support incoherent ray traversal|
|US9035946 *||Feb 13, 2014||May 19, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for triangle binning to support incoherent ray traversal|
|US9058691||Jun 24, 2014||Jun 16, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for a ray traversal unit to support incoherent ray traversal|
|US9087394||Feb 13, 2014||Jul 21, 2015||Raycast Systems, Inc.||Computer hardware architecture and data structures for packet binning to support incoherent ray traversal|
|US9349214 *||Aug 20, 2008||May 24, 2016||Take-Two Interactive Software, Inc.||Systems and methods for reproduction of shadows from multiple incident light sources|
|US9483864 *||Dec 5, 2008||Nov 1, 2016||International Business Machines Corporation||System and method for photorealistic imaging using ambient occlusion|
|US9619923||Nov 25, 2014||Apr 11, 2017||Raycast Systems, Inc.||Computer hardware architecture and data structures for encoders to support incoherent ray traversal|
|US20080307366 *||Jun 8, 2007||Dec 11, 2008||Apple, Inc.||Reflections in a multidimensional user interface environment|
|US20090256836 *||Apr 11, 2008||Oct 15, 2009||Dave Fowler||Hybrid rendering of image data utilizing streaming geometry frontend interconnected to physical rendering backend through dynamic accelerated data structure generator|
|US20100045675 *||Aug 20, 2008||Feb 25, 2010||Take Two Interactive Software, Inc.||Systems and methods for reproduction of shadows from multiple incident light sources|
|US20100141652 *||Dec 5, 2008||Jun 10, 2010||International Business Machines||System and Method for Photorealistic Imaging Using Ambient Occlusion|
|US20100277474 *||Dec 26, 2008||Nov 4, 2010||Akihiro Ishihara||Image processing device, image processing method, information recording medium, and program|
|US20150123972 *||Jan 6, 2015||May 7, 2015||Landmark Graphics Corporation||Systems and Methods for Rendering 2D Grid Data|
|EP2559006A4 *||Apr 7, 2011||Oct 28, 2015||Fortem Solutions Inc||Camera projection meshes|
|WO2016067116A1 *||Feb 9, 2015||May 6, 2016||Yandex Europe Ag||Method and electronic device for determining whether a point lies within a polygon in a multidimensional space|
|International Classification||G06T15/40, G06T15/06|
|Cooperative Classification||G06T15/405, G06T15/06, G06T15/60, G06T11/40|
|European Classification||G06T11/40, G06T15/40A, G06T15/60, G06T15/06|
|Dec 17, 2012||FPAY||Fee payment|
Year of fee payment: 4
|Jan 27, 2017||FPAY||Fee payment|
Year of fee payment: 8