US RE42534 E1 Abstract The present invention provides a graphics processing unit for rendering objects from a software application executing on a processing unit in which the objects to be rendered are received as control points of bicubic surfaces. According to the method and system disclosed herein, the graphics processing unit includes a transform unit, a lighting unit, a renderer unit, and a tessellate unit for tessellating both rational and non-rational object surfaces in real-time.
Claims(20) 1. A system, comprising:
a processor; and
a graphics processing unit (GPU) coupled to the processor, the GPU comprising a transform unit, a lighting unit, a renderer unit, and a tessellate unit coupled between the transform unit and the lighting unit;
wherein objects to be rendered by the GPU are transmitted as control points to the GPU, the transform unit transforms the control points, the tessellate unit executes a first set of instructions for tessellating both rational and non-rational object surfaces expressed in screen coordinates (SC), in real-time, the lighting unit lights vertices of the triangles resultant from tessellation, and the renderer unit renders and displays the triangles by executing a second set of instructions.
2. The graphics system of
for each bicubic surface,
subdividing a boundary curve representing an s interval until a projection of a length of a height of a curve bounding box is below a certain predetermined number of pixels as measured in screen coordinates; and
subdividing the boundary curve representing a t interval until a projection of a length of a height of the curve bounding box is below a certain predetermined number of pixels as measured in screen coordinates.
3. The graphics system of
4. The graphics system of
for all bicubic surfaces sharing a same s or t parameter boundary,
choosing as a common subdivision a reunion of the subdivisions in order to prevent cracks showing along the common boundary or a finest subdivision, the finest subdivision being the one with the most points inside the set.
5. The graphics system of
for each bicubic surface,
for each pair (si,tj) of parameters, where i and j represent a number of rows and columns, respectively,
calculating texture coordinates (u_{i,j}, v_{i,j}, q_{i,j}) and displacement coordinates (p_{i,j}, r_{i,j}) for vertex V_{i,j} through interpolation;
looking up vertex displacement (dx_{i,j}, dy_{i,j}, dz_{i,j}) corresponding to the displacement coordinates (p_{i,j}, r_{i,j}); and
generating triangles by connecting neighboring vertices.
6. The graphics system of
for each vertex V_{i,j},
calculating a normal N_{i,j} to that vertex, which was previously transformed in world coordinates;
calculating (dN_{i,j}) as a normal displacement for bump mapping as a function of (si,tj);
calculating N′_{i,j} = N_{i,j} + dN_{i,j} to displace the normal for bump mapping; and
calculating V′_{i,j} = V_{i,j} + (dx_{i,j}, dy_{i,j}, dz_{i,j})*N_{i,j} to displace the vertex for displacement mapping;
for each triangle,
executing bump and displacement mapping pixel-by-pixel for all the points inside the triangle; and
calculating a normal to the triangle for culling.
7. The graphics system of
8. The graphics system of
9. A real-time method for tessellating and rendering surfaces of an object on a computer system, comprising:
(a) performing transformation and tessellation by,
(i) for each surface, transforming 16 points;
(ii) performing three dimensional surface subdivision using the computer system by subdividing only two cubic curves comprising the surface;
(iii) terminating the subdivision by expressing the termination criterion in screen coordinates (SC) and by measuring curvature in pixels;
(iv) for each new view, generating a new subdivision, thereby producing automatic level of detail;
(v) preventing cracks at boundaries between adjacent surfaces by using a common subdivision for all surfaces sharing a boundary;
(vi) for the current subdivision, generating the vertices, normals, texture coordinates, and displacements used for bump and displacement mapping; and
(vii) generating triangles by connecting neighboring vertices;
(viii) for each vertex, calculating the normal, calculating normal displacement for bump mapping, displacing the normal for bump mapping, displacing the vertex for displacement mapping, wherein bump and displacement mapping are executed pixel by pixel for all the points inside each triangle; and
(ix) calculating the normal of each triangle; and
(b) performing rendering by
(i) for each triangle, clipping against a viewing viewport, calculating lighting for additional vertices produced by clipping, and culling backfacing triangles;
(ii) projecting all vertices into screen coordinates; and
(iii) rendering all the triangles produced after clipping and projection.
10. A system comprising:
a central processing unit;
a bus operatively connected to said central processing unit; and
a graphics processing unit operatively connected to said bus;
wherein the central processing unit transmits graphic objects to said graphics processing unit via said bus; and
wherein said graphics processing unit comprises a transform unit that transforms the graphic objects into transformed objects; a tessellation unit for tessellating the transformed objects, wherein said tessellation unit is operatively coupled between said transform unit and a lighting unit; and said lighting unit comprising means for lighting triangles resulting from said tessellation unit.
11. The system of claim 10 wherein the tessellation unit tessellates the transformed objects into a plurality of triangle vertices.
12. The system of claim 10 wherein the graphic objects have spatial coordinates and said transform unit transforms the spatial coordinates of said graphic objects.
13. The system of claim 10 further comprising a lighting unit operatively coupled to the tessellation unit for lighting the tessellated transformed objects.
14. The system of claim 13 further comprising a rendering unit operatively coupled to the lighting unit for rendering the lighted, tessellated, transformed objects.
15. A method comprising:
providing a tessellation unit coupled between a transform unit and a lighting unit;
receiving graphic objects to be rendered by a graphics processing unit;
transforming the graphic objects into transformed objects using said transform unit;
tessellating the transformed objects using said tessellation unit; and
lighting vertices of triangles resultant from said tessellating using said lighting unit.
16. The method of claim 15 wherein the graphic objects comprise control points of a bicubic surface.
17. The method of claim 16 wherein the control points comprise spatial coordinates and the transformation step comprises transforming the coordinates of the control points.
18. The method of claim 16 wherein the tessellation step comprises subdividing the surface into a number of triangles.
19. The method of claim 18 wherein the tessellation step further comprises terminating the subdivision when the curvature of a triangle is less than a predetermined amount.
20. The method of claim 19 wherein the degree of subdivision at meeting edges of two bicubic surfaces is equal.
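The crack-prevention limitation recited in claim 4 (choosing the reunion of the subdivisions along a shared boundary) can be sketched in code. The following Python fragment is an illustrative sketch only, not the patented implementation: it merges the parameter subdivisions of patches sharing a boundary, so that abutting patches are diced at identical parameter values along the shared edge.

```python
# Merge ("reunion of") the parameter subdivisions of patches that share a
# boundary, so that abutting patches are diced at identical parameter values
# along the shared edge and no cracks can open between them.

def common_subdivision(*subdivisions):
    # Each subdivision is a list of parameter values in [0, 1].
    merged = set()
    for sub in subdivisions:
        merged.update(sub)
    return sorted(merged)

# Two abutting patches whose differing curvature produced different dicings:
left_patch  = [0.0, 0.4, 1.0]
right_patch = [0.0, 0.5, 1.0]
print(common_subdivision(left_patch, right_patch))
# [0.0, 0.4, 0.5, 1.0]
```

Note that the reunion is always at least as fine as either input, which is why the claims offer the finest subdivision (the one with the most points) as an alternative common choice.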
Description

The present invention is a continuation of U.S. application Ser. No. 10/732,398, now U.S. Pat. No. 7,245,299, entitled "Bicubic Surface Real-Time Tesselation Unit" (1935CIP2), filed Dec. 9, 2003, issued on Jul. 17, 2007, which is a continuation-in-part of abandoned U.S. application Ser. No. 10/436,698, entitled "Bicubic Surface Rendering" (1935CIP), filed on May 12, 2003, which is a continuation-in-part of Ser. No. 09/734,438, filed Dec. 11, 2000, now U.S. Pat. No. 6,563,501, entitled "Bicubic Surface Rendering," issued May 13, 2003, which claims priority of provisional application No. 60/222,105, filed on Jul. 28, 2000, all of which are hereby incorporated by reference.

The present invention relates to computer graphics, and more specifically to a method and apparatus for rendering bicubic surfaces in real-time on a computer system.

Object models are often stored in computer systems in the form of surfaces. The process of displaying the object (corresponding to the object model) generally requires rendering, which usually refers to mapping the object model onto a two dimensional surface. At least when the surfaces are curved, the surfaces are generally subdivided or decomposed into triangles in the process of rendering the images. A cubic parametric curve is defined by the positions and tangents at the curve's end points; a Bezier curve is one such curve. Cubic curves may be generalized to bicubic surfaces by defining cubic equations of two parameters, s and t.
In other words, bicubic surfaces are defined as parametric surfaces where the (x,y,z) coordinates in a space called "world coordinates" (WC) of each point of the surface are functions of s and t, defined by a geometry matrix P comprising 16 control points. While the parameters s and t each describe a closed unidimensional interval (typically the interval [0,1]), the points (x,y,z) describe the surface: x=f(s,t), y=g(s,t), z=h(s,t), with sε[0,1], tε[0,1], where ε denotes membership in the interval. The space determined by s and t, the bidimensional interval [0,1]×[0,1], is called "parameter coordinates" (PC). Textures are described in a space called "texture coordinates" (TC) that can be two or even three dimensional, by sets of points of two ((u,v)) or three ((u,v,q)) coordinates. The process of attaching a texture to a surface is called "texture-object association" and consists of associating u, v and q with the parameters s and t via some functions: u=a(s,t), v=b(s,t).
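The parametric definition above can be illustrated with a short sketch. The following Python fragment is an illustration, not part of the patent; it uses the standard Bernstein-basis (Bezier) evaluation to compute a surface point (x,y,z) = (f(s,t), g(s,t), h(s,t)) from a 4×4 geometry matrix of control points.

```python
# Evaluate a bicubic Bezier surface point from its 4x4 grid of control points.
# P is a 4x4 list of (x, y, z) tuples; s and t lie in [0, 1].

def bernstein3(u):
    # The four cubic Bernstein basis functions at parameter u.
    v = 1.0 - u
    return (v**3, 3*u*v**2, 3*u**2*v, u**3)

def eval_bicubic(P, s, t):
    bs, bt = bernstein3(s), bernstein3(t)
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bs[i] * bt[j]
            x += w * P[i][j][0]
            y += w * P[i][j][1]
            z += w * P[i][j][2]
    return (x, y, z)

# A flat patch whose control points lie in the z = 0 plane:
flat = [[(i, j, 0.0) for j in range(4)] for i in range(4)]
print(eval_bicubic(flat, 0.0, 0.0))  # corner control point P[0][0]
print(eval_bicubic(flat, 1.0, 1.0))  # corner control point P[3][3]
```

The surface interpolates its four corner control points, while the twelve other control points shape the interior without lying on it.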
This process is executed off-line because the subdivision of the surfaces and the measurement of the resulting curvature are very time consuming. Furthermore, each vertex or triangle plane normal needs to be transformed when the surface is transformed in response to a change of view of the surface, a computationally intensive process that may need dedicated hardware. Also, there is no accounting for the fact that the surfaces are actually rendered in a space called "screen coordinates" (SC) after a process called "projection" which distorts such surfaces to the point where we need to take into consideration the curvature in SC, not in WC.

The state of the art in today's hardware architecture for rendering relies overwhelmingly on triangle databases such as meshes, strips, and fans. The current state of the art in the computer graphics industry is as follows. The object modeling in the application is executed on parametric surfaces such as nurbs, Bezier, and splines, and the surfaces are subdivided or tessellated off-line and stored as triangle vertices in a triangle database by means of commercially available tools, such as the Alias suite. The triangle vertices are then transmitted from the CPU to the graphics hardware. Unfortunately, the off-line tessellation produces a fixed triangulation that may exhibit an excessively large number of very small triangles when the object is far away. Triangle rendering in this case is dominated by the processing of vertices (transformation, lighting) and by the triangle setup (the calculation of the color and texture gradients). Since triangles may reduce to a pixel or less, it is obvious that this is an inefficient treatment. Conversely, when the object is very close to the viewer, the composing triangles may appear very large and the object loses its smooth appearance, looking more like a polyhedron. The increase in scene complexity has pushed up the number of triangles, which has pushed up the demands for higher bus bandwidth.
For example, the bus between the CPU and the graphics hardware can become a bandwidth bottleneck. With the advent of faster arithmetic it has become possible to change the current architecture such that the CPU transmits surface control points rather than pre-tessellated triangle databases. In the early 90's Nvidia Corporation made an attempt to introduce a biquadric based hardware renderer. The attempt was not a technical and commercial success because biquadrics have an insufficient number of degrees of freedom: all the models use bicubics, and none of the models uses biquadrics. More recently, Henry Moreton from Nvidia has resurrected the real-time tesselation unit described in U.S. Pat. No. 6,597,356 entitled "Integrated Tesselator in a Graphics Processing Unit," issued Jul. 22, 2003. Moreton's invention doesn't directly tesselate patches in real-time, but rather uses triangle meshes pre-tesselated off-line in conjunction with a proprietary stitching method that avoids cracking and popping at the seams between the triangle meshes representing surface patches. His tesselator unit outputs triangle databases to be rendered by the existing components of the 3D graphics hardware.

Accordingly, what is needed is a system and method for performing tessellation in real-time. The present invention addresses such a need. The present invention provides a graphics processing unit for rendering objects from a software application executing on a processing unit in which the objects to be rendered are received as control points of bicubic surfaces. According to the method and system disclosed herein, the graphics processing unit includes a transform unit, a lighting unit, a renderer unit, and a tessellate unit for tessellating both rational and non-rational object surfaces in real-time. The present invention will be described with reference to the accompanying drawings. The present invention is directed to a method and apparatus for minimizing the number of computations required for the subdivision of bicubic surfaces into triangles for real-time tessellation.
The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.

Because prior art methods for performing surface subdivision are so slow and limited, a method is needed for rendering a curved surface that minimizes the number of required computations, such that the images can potentially be rendered in real-time (as opposed to off-line). U.S. Pat. No. 6,563,501, by the Applicant of the present application, provides an improved method and system for rendering bicubic surfaces of an object on a computer system. Each bicubic surface is defined by sixteen control points and bounded by four boundary curves, and each boundary curve is formed by a bounding box of line segments formed between four of the control points. The method and system include transforming only the control points of the surface given a view of the object, rather than points across the entire bicubic surface. Next, a pair of orthogonal boundary curves to process is selected. After the boundary curves have been selected, each of the curves is iteratively subdivided. The method disclosed in the '501 patent minimizes the number of computations required for rendering of an object model by requiring that only two orthogonal curves of the surface be subdivided. The present invention utilizes the above method for minimizing the number of computations required for the subdivision of bicubic surfaces into triangles in order to provide an improved architecture for the computer graphics pipeline hardware.
The improved architecture replaces triangle mesh transformation and rendering with a system that transforms bicubic patches and tesselates the patches in real-time. This process is executed in a real-time tesselation unit that replaces the conventional transformation unit present in the prior art hardware 3D architectures. According to the present invention, the reduction in computations is attained by reducing the subdivision to the subdivision of only two orthogonal curves. In addition, the criteria for sub-division may be determined in SC. The description is provided with reference to Bezier surfaces for illustration. Due to such features, the present invention may enable objects to be subdivided and rendered in real-time. The partition into triangles may also be adapted to the distance between the surface and the viewer, resulting in an optimal number of triangles. As a result, the effect of automatic level of detail may be obtained, whereby the number of resulting triangles is inversely proportional to the distance between the surface and the viewer. The normals to the resulting tiles are also generated in real-time by using the cross product of the vectors that form the edges of the tiles. The texture coordinates associated with the vertices of the resulting triangles are computed in real-time by evaluating the functions u=a(s,t) and v=b(s,t). The whole process is directly influenced by the distance between viewer and object; the SC space plays a major role in the computations.

The steps involved in the combined subdivision and rendering of bicubic surfaces in accordance with the present invention are described below in pseudo code. As will be appreciated by one of ordinary skill in the art, the text between the "/*" and "*/" symbols denotes comments explaining the pseudo code. All steps are performed in real-time; steps 0 through 4 are transformation and tessellation, while steps 5 through 7 are rendering.
Step 0 /* For each surface transform only 16 points instead of transforming all the vertices inside the surface. There is no need to transform the normals to the vertices since they are generated at step 4 */
For each bicubic surface
    Transform the 16 control points and the single normal that determine the surface
Step 1 /* Simplify the three dimensional surface subdivision by reducing it to the subdivision of two cubic curves */
For each bicubic surface
    Subdivide the boundary curve representing the s interval until the projection of the length of the height of the curve bounding box is below a certain predetermined number of pixels as measured in screen coordinates.
    Subdivide the boundary curve representing the t interval until the projection of the length of the height of the curve bounding box is below a certain predetermined number of pixels as measured in screen coordinates.
/* Simplify the subdivision termination criteria by expressing them in screen coordinates (SC) and by measuring the curvature in pixels. For each new view, a new subdivision can be generated, producing automatic level of detail */
Step 2
For all bicubic surfaces sharing a same parameter (either s or t) boundary
    Choose as the common subdivision the reunion of the subdivisions in order to prevent cracks showing along the common boundary
    —OR—
    Choose as the common subdivision the finest subdivision (the one with the most points inside the set)
/* Prevent cracks at the boundary between adjacent surfaces by using a common subdivision for all surfaces sharing a boundary */
Step 3 /* Generate the vertices, normals, the texture coordinates, and the displacements used for bump and displacement mapping for the present subdivision */
For each bicubic surface
    For each pair (si,tj) of parameters /* All calculations employ some form of direct evaluation of the variables. Here, i and j represent a number of rows and columns, respectively */
        Calculate texture coordinates (u_{i,j}, v_{i,j}, q_{i,j}) and displacement coordinates (p_{i,j}, r_{i,j}) for vertex V_{i,j} through interpolation /* texture-, displacement-map and vertex coordinates as a function of (si,tj) */
        Look up vertex displacement (dx_{i,j}, dy_{i,j}, dz_{i,j}) corresponding to the displacement coordinates (p_{i,j}, r_{i,j})
        Generate triangles by connecting neighboring vertices.
Step 4
For each vertex V_{i,j}
    Calculate the normal N_{i,j} to that vertex, which was previously transformed in world coordinates
    Calculate (dN_{i,j}) as the normal displacement for bump mapping as a function of (si,tj)
    N′_{i,j} = N_{i,j} + dN_{i,j} /* displace the normal for bump mapping */
    V′_{i,j} = V_{i,j} + (dx_{i,j}, dy_{i,j}, dz_{i,j})*N_{i,j} /* displace the vertex for displacement mapping */
/* bump and displacement mapping are executed in the renderer, pixel by pixel for all the points inside each triangle */
For each triangle
    Calculate the normal to the triangle /* used for culling */
Step 5
For each triangle
    Clip against the viewing viewport
    Calculate lighting for the additional vertices produced by clipping
    Cull backfacing triangles
Step 6
Project all the vertices V_{i,j} into screen coordinates (SC)
Step 7
Render all the triangles produced after clipping and projection

In operation, the CPU transmits the control points of the bicubic surfaces over the bus to the graphics processing unit, which transforms, tessellates, lights, and renders them. Referring again to U.S. Pat. No. 6,563,501, we use the described subdivision algorithm while applying our termination criterion. The geometric adaptive subdivision induces a corresponding parametric subdivision.
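Steps 1 and 2 of the pseudo code can be sketched in executable form. The following Python fragment is an illustrative sketch only, not the patented implementation: it subdivides a 2-D cubic Bezier curve with de Casteljau splits until a flatness measure, scaled into pixels by an assumed "pixels_per_unit" projection factor, falls below a pixel threshold. A closer view (larger scale factor) yields a finer subdivision, illustrating the automatic level of detail described above.

```python
# Adaptive subdivision of a 2-D cubic Bezier curve until it is flat to within
# a pixel tolerance in screen coordinates. "pixels_per_unit" stands in for
# the projection: a closer view means a larger scale and a finer subdivision.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def split(curve):
    # de Casteljau subdivision at the parametric midpoint.
    p0, p1, p2, p3 = curve
    l1, h, r2 = midpoint(p0, p1), midpoint(p1, p2), midpoint(p2, p3)
    l2, r1 = midpoint(l1, h), midpoint(h, r2)
    mid = midpoint(l2, r1)
    return (p0, l1, l2, mid), (mid, r1, r2, p3)

def flatness(curve):
    # Max distance of the inner control points from the chord p0-p3:
    # an upper bound on the curve's deviation (convex-hull property).
    p0, p1, p2, p3 = curve
    dx, dy = p3[0] - p0[0], p3[1] - p0[1]
    chord = max((dx * dx + dy * dy) ** 0.5, 1e-12)
    def dist(p):
        return abs(dy * (p[0] - p0[0]) - dx * (p[1] - p0[1])) / chord
    return max(dist(p1), dist(p2))

def subdivide(curve, pixels_per_unit, tol_px=1.0):
    # Returns the chain of subdivision points along the curve.
    if flatness(curve) * pixels_per_unit <= tol_px:
        return [curve[0], curve[3]]
    left, right = split(curve)
    return subdivide(left, pixels_per_unit, tol_px)[:-1] + \
           subdivide(right, pixels_per_unit, tol_px)

curve = ((0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0))
near = subdivide(curve, pixels_per_unit=100.0)  # object close to the viewer
far = subdivide(curve, pixels_per_unit=2.0)     # same object far away
print(len(near) > len(far))                     # finer dicing when close: True
```

Because the termination test is applied in screen units, no tessellation density needs to be chosen by hand; the same curve dices finer or coarser as the view changes.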
The geometry vectors of the resulting left and right cubic curves may be expressed as follows:

Gleft = [ P0, (P0+P1)/2, (P0+2*P1+P2)/4, (P0+3*P1+3*P2+P3)/8 ]
Gright = [ (P0+3*P1+3*P2+P3)/8, (P1+2*P2+P3)/4, (P2+P3)/2, P3 ]

The edge subdivision results in a subdivision of the parametric intervals s: {s0, s1, . . . } and t: {t0, t1, . . . }. The surface is evaluated in matrix form as

x(s,t) = S*Mb*Px*Mb^T*T^T

and similarly for y and z, where S = [s^3 s^2 s 1], T = [t^3 t^2 t 1], Mb is the Bezier basis matrix, and Px is the geometry matrix of the x coordinates of the 16 control points. For s=constant the matrix M = S*Mb*Px*Mb^T is constant, and evaluating the curve x(t) reduces to the product M*T^T. In order to determine the vertex normals for each generated vertex V_{i,j}, the cross product of the vectors that form the edges of the adjacent tiles is used. If bump mapping or displacement mapping are enabled we need to calculate additional data:
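The matrix form above can be checked numerically. This Python sketch is illustrative (it assumes the standard cubic Bezier basis matrix Mb): it evaluates x(s,t) = S*Mb*Px*Mb^T*T^T and verifies that the result agrees with direct Bernstein-basis evaluation.

```python
# Evaluate x(s,t) = S * Mb * Px * Mb^T * T^T with the cubic Bezier basis
# matrix Mb, and cross-check against direct Bernstein-basis evaluation.

MB = [[-1,  3, -3, 1],
      [ 3, -6,  3, 0],
      [-3,  3,  0, 0],
      [ 1,  0,  0, 0]]

MB_T = [[MB[j][i] for j in range(4)] for i in range(4)]   # transpose of Mb

def matmul(a, b):
    # 4x4 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def eval_matrix_form(Px, s, t):
    S = [s**3, s**2, s, 1.0]
    T = [t**3, t**2, t, 1.0]
    M = matmul(matmul(MB, Px), MB_T)                      # Mb * Px * Mb^T
    row = [sum(S[i] * M[i][j] for i in range(4)) for j in range(4)]
    return sum(row[j] * T[j] for j in range(4))

def eval_bernstein(Px, s, t):
    # Direct evaluation in the Bernstein basis, for comparison.
    b = lambda u: ((1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3)
    bs, bt = b(s), b(t)
    return sum(bs[i] * bt[j] * Px[i][j] for i in range(4) for j in range(4))

Px = [[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(abs(eval_matrix_form(Px, 0.3, 0.7) - eval_bernstein(Px, 0.3, 0.7)) < 1e-12)
# True
```

For s = constant, Mb * Px * Mb^T (and hence S * Mb * Px * Mb^T) can be computed once and reused for every t along the row, which is the optimization the text describes.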
We calculate the texture coordinates through bilinear interpolation. The subdivision algorithm described in U.S. Pat. No. 6,563,501 applies to non-rational surfaces. In a further embodiment of the present invention, the algorithm is extended to another class of surfaces, non-uniform rational B-spline surfaces, or NURBS. NURBS are a very important form of modeling 3-D objects in computer graphics. A non-uniform rational B-spline surface of degree (p, q) is defined by

S(s,t) = [ sum_{i=0..m} sum_{j=0..n} N_{i,p}(s) N_{j,q}(t) w_{i,j} P_{i,j} ] / [ sum_{i=0..m} sum_{j=0..n} N_{i,p}(s) N_{j,q}(t) w_{i,j} ]

where the P_{i,j} are the control points, the w_{i,j} are their associated weights, and the N_{i,p}, N_{j,q} are the B-spline basis functions.
Such a surface lies within a convex hull formed by its control points. To fix the idea, let's pick m=n=4. There are 16 control points, P_{i,j}. Now consider any one of the curves:
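The rational form can be illustrated numerically. This Python sketch is illustrative: it evaluates a weighted (rational) surface point for the special case where the degree-(3,3) B-spline basis reduces to the Bernstein basis (clamped knot vectors without interior knots); with all weights equal to 1 it reduces to the non-rational bicubic case.

```python
# Rational surface point: weighted Bernstein average of 16 control points.
# (With clamped knot vectors and no interior knots, a degree-(3,3) B-spline
# basis reduces to the Bernstein basis, illustrating the 16-control-point case.)

def bernstein3(u):
    v = 1.0 - u
    return (v**3, 3*u*v**2, 3*u**2*v, u**3)

def eval_rational(P, W, s, t):
    bs, bt = bernstein3(s), bernstein3(t)
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i in range(4):
        for j in range(4):
            w = W[i][j] * bs[i] * bt[j]
            den += w
            for k in range(3):
                num[k] += w * P[i][j][k]
    return tuple(c / den for c in num)

P = [[(i, j, 0.0) for j in range(4)] for i in range(4)]
W = [[1.0] * 4 for _ in range(4)]          # all weights 1 => non-rational case
print(eval_rational(P, W, 0.5, 0.5))       # (1.5, 1.5, 0.0)
```

Since the result is a convex combination of the control points (the weighted basis values are non-negative and sum to one after division), the evaluated point always lies within the convex hull of the control points, as stated above.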
Such a curve can be obtained by fixing one of the two parameters s or t in the surface description. For example s=variable, t=0 produces such a curve. Like in the case of Bezier surfaces, there are 8 such curves, 4 boundary ones and 4 internal ones. The subdivision of the surface reduces to the subdivision of the convex hull of the boundary curves or of the internal curves as described in the case of the Bezier surfaces. The subdivision terminates when the flatness condition, measured in screen coordinates, holds for the selected curve in the s direction AND for the selected orthogonal curve in the t direction.

According to a further aspect of the present invention, a more general criterion is provided: the flatness condition must hold for the boundary curve in the s direction, AND for the internal curve in the s direction, AND for the boundary curve in the t direction, AND for the internal curve in the t direction.
The above criterion is the most general criterion and it will work for any class of surface, both rational and non-rational. It will also work for deformable surfaces. It will work for surfaces that are more curved along the boundary or more curved internally. Since the curvature of deformable surfaces can switch between being boundary-limited and internally-limited, the flatness of both types of curves will need to be measured at the start of the tesselation associated with each instance of the surface. The pair of orthogonal curves used for tesselation can then be one of: both boundary, both internal, one boundary and one internal.

In yet another embodiment, the subdivision termination criteria may be used for the control of numerically controlled machines. The criterion described below is calculated in object coordinates. In the formulas described below "tol" represents the tolerance, expressed in units of measurement (typically micrometers), accepted for the processing of the surfaces of the machined parts. Denoting the control points of each of the four boundary curves by P0, P1, P2, P3, the criterion requires:

Maximum {distance (P1, segment(P0, P3)), distance (P2, segment(P0, P3))} < tol

for the first boundary curve, AND for the second boundary curve, AND for the third boundary curve, AND for the fourth boundary curve.

If there are no special prevention methods, cracks may appear at the boundary between abutting patches. This is mainly due to the fact that the patches are subdivided independently of each other. Abutting patches may and do exhibit different curvatures, resulting in different subdivisions. One of the approaches disclosed herein exhibits identical straight edges for the two patches sharing the boundary. The other implementation exhibits even stronger continuity: the subpatches generated through subdivision form continuous strips orthogonal to the shared boundary. This is due to the fact that abutting patches are forced to have the same parametric subdivision.
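The object-coordinate criterion above can be sketched as follows. In this illustrative Python fragment (the segment-distance helper and the sample control points are assumptions for illustration, not from the patent), a boundary curve needs no further subdivision for machining when its interior control points lie within "tol" of the segment joining its end points.

```python
# Object-coordinate flatness test for numerically controlled machining:
# a cubic boundary curve passes when its interior control points lie
# within "tol" (e.g. micrometers) of the segment joining its end points.

def dist_point_segment(p, a, b):
    # Distance from point p to segment a-b, in 3-D object coordinates.
    ab = [b[k] - a[k] for k in range(3)]
    ap = [p[k] - a[k] for k in range(3)]
    ab2 = sum(c * c for c in ab)
    u = 0.0 if ab2 == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / ab2))
    q = [a[k] + u * ab[k] for k in range(3)]
    return sum((p[k] - q[k]) ** 2 for k in range(3)) ** 0.5

def within_tolerance(curve, tol):
    p0, p1, p2, p3 = curve
    return max(dist_point_segment(p1, p0, p3),
               dist_point_segment(p2, p0, p3)) < tol

bumpy = ((0, 0, 0), (1, 0, 2), (2, 0, 2), (3, 0, 0))
print(within_tolerance(bumpy, tol=0.5))   # False: needs further subdivision
print(within_tolerance(bumpy, tol=5.0))   # True
```

The same test, applied to all four boundary curves and combined with AND, yields the machining criterion described above.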
The present invention provides two different crack prevention methods, each employing a slightly different subdivision algorithm. 1. In order to avoid cracks between patches, use a "zipper approach" to fix the triangle strips that result at the four borders of the surface. All four boundary curves for the patches situated at the edge of the object are used. 2. In order to avoid cracks between patches, use a second pass that generates the reunion of the subdivisions for all the patches in a patch strip. All four boundary curves for the patches situated at the edge of the object are used.

In a preferred embodiment, in order to facilitate the design of drivers for the disclosed architecture, a set of surface primitives is provided; the first three primitives are described with reference to the accompanying figures.

A further embodiment of the present invention provides a method for accelerating rendering. A well known technique used for accelerating rendering is backface culling, a method which discards triangles that are facing away from the viewer. It is beneficial to extend this technique to cover backfacing surfaces. This way, we avoid the computational costs of tesselating surfaces that face away from the viewer. Our proposed method discards such surfaces as a whole, before even starting the tesselation computation. If ANY of the panels of the type {P_{i,j}, P_{i,j+1}, P_{i+1,j+1}, P_{i+1,j}} formed by the control points faces the viewer, the surface is tesselated; otherwise the surface is discarded as backfacing. An alternative criterion can be given as: if the bottom panel {P_{0,0}, P_{0,3}, P_{3,3}, P_{3,0}} of the convex hull faces away from the viewer, the surface is discarded.

A method and system have been disclosed for performing tessellation in real-time in a GPU. Software written according to the present invention is to be stored in some form of computer-readable medium, such as memory or CD-ROM, or transmitted over a network, and executed by a processor. Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention.
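The whole-surface culling idea can be sketched as follows. This Python fragment is an illustrative sketch: the panel corners and the sign convention (a normal pointing along the view direction means the panel faces away) are assumptions for illustration, not the patent's exact formulation.

```python
# Sketch of whole-surface backface culling: test whether a control-net
# panel faces away from the viewer before spending any tessellation work.
# Panel choice and sign convention are illustrative assumptions.

def sub(a, b):
    return [a[k] - b[k] for k in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def panel_faces_away(p00, p03, p33, p30, view_dir):
    # Normal of the quad panel spanned by four corner control points.
    n = cross(sub(p03, p00), sub(p30, p00))
    return dot(n, view_dir) >= 0.0

# Corner control points of a patch lying in the z = 0 plane:
corners = ((0, 0, 0), (0, 3, 0), (3, 3, 0), (3, 0, 0))
print(panel_faces_away(*corners, view_dir=(0, 0, -1)))  # True: cull candidate
print(panel_faces_away(*corners, view_dir=(0, 0, 1)))   # False: tessellate
```

Discarding a backfacing patch with one such test saves the entire per-patch subdivision, lighting, and setup cost, which is the motivation given in the text.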
Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.