US 20040113911 A1 Abstract A method for shading polygon surfaces in a real time rendering system. At least one polygon surface to be shaded is provided, the polygon surface having a plurality of pixels and including at least one surface angle. At least one point light source is provided. For substantially each drawn pixel of said polygon surface, a substantially normalized 3D surface direction vector and a 3D point light vector are calculated using computer hardware.
Claims (21)
1. A method for shading polygon surfaces in a real time rendering system comprising the steps of:
providing at least one polygon surface to be shaded, said polygon surface comprised of a plurality of pixels and including at least one surface angle; providing at least one point light source; and calculating using computer hardware, for substantially each drawn pixel of said polygon surface, a substantially normalized 3D surface direction vector and a 3D point light vector.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of claim 5 further comprising the step of calculating a specular light value using the point light vector and the reflection vector.
7. The method of
8. A method for shading polygon surfaces in a real time rendering system for drawing a plurality of drawn pixels comprising the steps of:
providing at least one polygon having a polygon surface to be shaded, said surface comprising a plurality of pixels, and said surface including at least one surface angle; providing at least one light source, said light source having a corresponding three-dimensional light source vector; providing a bump map, said bump map having corresponding bump map vectors; and using a processor to calculate, for substantially each drawn pixel of said surface, a substantially normalized three dimensional (3D) surface direction vector generated from said at least one surface angle and said bump map.
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
19. The method of
20. The method of
21. The method of
Description
[0001] The present invention relates to the field of per-pixel lighting in real-time three-dimensional (“3D”) computer graphics hardware and software. Most real-time computer graphics systems rely on per-vertex lighting schemes such as Gouraud shading. In this scheme, the curvature of a polygon surface is represented through different surface normal vectors at each polygon vertex. Lighting calculations are carried out for each vertex and the resultant color information is interpolated across the surface of the polygon. Lighting schemes such as Gouraud shading are generally utilized for their speed and simplicity of operation since they require far less calculation than more complex strategies. Per-pixel lighting, in contrast, is a lighting strategy in which separate lighting calculations for one or more light sources are carried out for each pixel of a drawn polygon. Most well-known per-pixel lighting strategies are variations on a basic vertex normal interpolation scheme, i.e., Phong shading. Vertex normal interpolation strategies interpolate the normal vectors given at each vertex throughout the polygon surface.
For each pixel, the interpolated vertex normal is normalized to unit length and then used in per-pixel lighting calculations. Typically the per-pixel calculations involve taking the dot product of the normal vector and the light source vector to arrive at a light source brightness coefficient. While fast per-pixel dot product hardware is not infeasible with the speed and complexity of today's microprocessors, the calculations involved in normalizing the interpolated vertex vector (i.e., floating point square root and division) are prohibitive for practical real-time implementation at high speed. [0002] Another per-pixel lighting technique, commonly referred to as bump mapping, involves using a two-dimensional (“2D”) map to store surface height or orientation and using texel values from this map to perturb a (usually interpolated) surface normal vector. Calculation in traditional combinational bump mapping (i.e., where the bump map angle perturbation is combined with a potentially changing surface normal) mostly involves resolving the bump map perturbation to a 3D vector that is subsequently combined with the surface normal vector. Since the surface normal vector may change from pixel to pixel, an appropriate, usually orthogonal, orientation must be given to the bump map vector. This process usually requires additional normalization and a significant computational overhead, making combinational bump mapping approaches impractical for efficient real-time calculation. A well-known method of avoiding these calculations is to store a bump map as a collection of normalized 3D vectors, therefore avoiding the need for normalization and combination. While this strategy is more practical for real-time implementations, it has several drawbacks. 
Such a system is inflexible since bump maps may only be used for objects in preset orientations, and surface curvature must be represented within the bump map rather than through vertex normals as in Phong shading and its equivalents. Furthermore, the accuracy of the image is limited by the granularity of the bump map, since values falling between adjacent texels are traditionally interpolated but not re-normalized. Another drawback of the above-mentioned bump mapping scheme is the size and inflexibility of the bump maps. Since the bump map texels contain 3D vectors, medium to large complexity maps will occupy a great deal of memory. Also, due to the specific nature of the bump maps, they are generally only usable on the surfaces for which they were designed; therefore such bump maps are not often used for multiple surfaces. [0003] A further aspect of per-pixel lighting is the calculation of intensity of specular reflections. Traditionally, the calculation of specular reflection involves the dot product of the light source vector and the view reflection vector (the view, or eye, vector reflected around the surface normal vector). Alternately, the same calculation can be made with the dot product of the view vector and the reflection of the light vector around the normal. In either of the alternatives, at least one vector must be reflected around a surface normal vector that potentially changes from pixel to pixel. The calculation required to obtain a reflected vector, while not as costly as bump map combination, is nonetheless significant. [0004] Yet another complication in per-pixel lighting is presented by the cases of point light sources and point view vectors. Point light sources involve a light vector that changes on a per-pixel basis. Traditionally, the difference vector between the surface point and the light source is calculated and normalized for each pixel, which is computationally undesirable for efficient calculation.
Likewise, point view vectors involve a view vector that changes on a per-pixel basis. Utilizing point view vectors also requires the calculation and normalization of a difference vector on a per-pixel basis. [0005] The application of the aforementioned per-pixel lighting techniques provides visually enhanced, higher quality and more realistic images than today's real-time image generators are capable of producing. While techniques exist which can provide similar images, these techniques are difficult to implement and inflexible to use. Therefore, there exists a real need for a practical and efficient apparatus and method that provides vertex normal interpolation, combinational bump mapping, specular reflection calculation, and support for point lighting and a point viewer within real-time 3D graphics systems. [0006] The present invention is directed to a method for shading polygon surfaces in a real time rendering system. The method includes the step of providing at least one polygon surface to be shaded, the polygon surface having a plurality of pixels and including at least one surface angle. The method also includes the step of providing at least one point light source. The method further includes the step of calculating using computer hardware, for substantially each drawn pixel of said polygon surface, a substantially normalized 3D surface direction vector and a 3D point light vector. [0007]FIG. 1 is a diagram illustrating the translation of normal vectors to a view coordinate system in accordance with a preferred embodiment of the invention. [0008]FIG. 2 is a diagram illustrating the conversion of a 3D vector into an angle-proportional 2D vector in accordance with a preferred embodiment of the invention. [0009]FIG. 3 is a diagram illustrating the combination of a surface angle vector and a bump map vector to produce a composite surface angle vector in accordance with a preferred embodiment of the invention. [0010]FIG.
4 is a diagram illustrating the production of a view reflection vector from a composite surface angle vector in accordance with a preferred embodiment of the invention. [0011]FIG. 5 is a diagram illustrating the calculation of the view reflection vector. [0012]FIG. 6 is a diagram of a preferred hardware embodiment of the present invention. [0013]FIG. 7 is a diagram illustrating an AP translation unit in accordance with a preferred embodiment of the invention. [0014]FIG. 8 is a diagram illustrating a preferred hardware embodiment of the per-pixel operation of the present invention. [0015]FIG. 9 is a diagram illustrating the preferred embodiment of the point light operations of the present invention. [0016] The present invention provides a method and system for the efficient calculation of complex per-pixel lighting effects in a real-time computer graphics system. For the purposes of this disclosure, the term “real-time computer graphics system” is defined as any computer-based system capable of or intended to generate images at a rate greater than or equal to 10 images per second. Some examples of real-time computer graphics systems include: stand-alone console videogame hardware, 3D graphics accelerator cards for PCs and workstation-class computers, multipurpose set-top boxes, virtual reality imaging devices, and imaging devices for commercial or military flight simulators. All of the above-mentioned systems are likely to benefit from the increased image quality afforded by the methods and practices of the present invention. [0017] As used herein, the term “angle-proportional” is defined as a characteristic of a 2D vector wherein the length of the 2D vector is proportional to the angle between a 3D direction vector (corresponding to said 2D vector) and a 3D axis vector (usually representing the z-axis of a pre-defined coordinate system).
[0018] As also used herein, the term “view coordinate system” is defined as a 3D coordinate system (which can be defined by a 3D position vector and at least three 3D direction vectors) that represents the position and orientation from which a 3D scene is being viewed. [0019] As further used herein, the term “view vector” is defined herein as a 3D vector representing the forward direction from which a scene is being viewed. The view vector is usually directed, either positively or negatively, along the z-axis of the view coordinate system and is expressed in world-view coordinates. [0020] As further used herein, the term “current polygon” is defined herein as the polygon that is currently being operated on by the methods of the present invention. [0021] Lastly, as used herein, the term “current pixel” is defined herein as the pixel within a polygon surface currently being operated on by methods of the present invention. [0022] The present invention comprises two areas of execution within a computer graphics system: per-polygon operations and per-pixel operations. The per-polygon operations of the present invention are performed once for each polygon in a scene to which the present invention is applied. Likewise, the per-pixel operations of the present invention are performed for each drawn pixel on a polygon surface wherein the aforementioned per-polygon operations are assumed to have been previously applied to said polygon. Additionally, the present invention provides a method to enable accurate real-time calculation of point light vectors useful for advanced lighting strategies. Most of the per-polygon and per-pixel operations of the present invention are detailed in U.S. patent application Ser. No. 09/222,036 filed on Dec. 29, 1998, in the name of David J. Collodi, the disclosure of which is hereby incorporated by reference. The operations are detailed herein for purposes of consistency and example.
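Before turning to the per-polygon operations, the conventional per-pixel costs described in the background (re-normalizing an interpolated vertex normal, and reflecting a vector around a per-pixel normal) can be sketched in a few lines. This is an illustrative Python sketch of the prior-art calculations the invention is designed to avoid; the function names are ours, not the patent's:

```python
import math

def normalize(v):
    # floating point square root plus divisions: the per-pixel cost at issue
    ln = math.sqrt(sum(c * c for c in v))
    return tuple(c / ln for c in v)

def phong_diffuse(n_interp, L):
    # the interpolated vertex normal must be re-normalized before N . L
    n = normalize(n_interp)
    return max(0.0, sum(a * b for a, b in zip(n, L)))

def reflect(L, N):
    # R = 2(N . L)N - L: reflect L around unit normal N (specular setup)
    d = sum(a * b for a, b in zip(N, L))
    return tuple(2.0 * d * nc - lc for nc, lc in zip(N, L))

# a normal interpolated halfway between two vertex normals is shorter than unit
print(phong_diffuse((0.0, 0.5, 0.5), (0.0, 0.0, 1.0)))   # ~0.7071
print(reflect((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))         # -> (0.0, 0.0, 1.0)
```

Performing the square root and divisions in `normalize`, and the reflection, for every drawn pixel is exactly the overhead the disclosure argues is prohibitive in hardware.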
[0023] The per-polygon operations of the present invention are performed in order to provide a set of angle-proportional surface angle vectors to be utilized within the per-pixel operations. For the purposes of simplicity, this disclosure shall assume the existence of a polygon to be rendered wherein said polygon provides a 3D surface normal vector for each of its vertices and said polygon is the current polygon. The surface normal vectors are used to collectively specify the amount of curvature along the polygon surface. [0024] First, the surface normal vectors of the current polygon are rotated to correspond to the direction of the view coordinate system. It is well known in the art that a 3D coordinate system (or rather the translation to a particular 3D coordinate system) can be represented by a 4×4 matrix. A 4×4 matrix can represent both rotational and positional translations. Since the rotation of surface normal vectors requires only rotational translations, a 3×3 matrix, M, is used. Each surface normal vector, N_i, is multiplied by M to produce a rotated vector, R_i = M·N_i. [0025] The above calculation is performed for each surface normal vector belonging to the current polygon. [0026] Next, each rotated vector, R_i, is converted into an angle-proportional 2D surface angle vector, n_i, as illustrated in FIG. 2. [0027] An optional step is to limit the vectors that are far from the view angle. The direction of vectors at or near 180° from the viewer is unstable. It is therefore advantageous to limit the direction and distance of these vectors. An example of a basic limiting method is detailed in the following disclosure. First, a 3D vector U is obtained where the direction of U is perpendicular to the plane of the polygon (i.e., U is the “real” polygon surface normal vector). Next, the x and y components of U are scaled by dividing each component by the larger component (either x or y) of U. Then the scaled x and y components of U are doubled. The scaled x and y components of U form 2D vector u, which represents the angle-proportional direction of the polygon surface at (or slightly greater than) 180°.
Angle-proportional n vectors with large angles relative to the viewer (which can easily be derived from the z-coordinate of the corresponding R vector) are interpolated with the u vector weighted on the relative angle (to viewer) of the n vector. [0028] A further optional step at this point is to calculate a 2-dimensional bump map rotation value. Since a bump map, in whatever format it is presented, is basically a 2D texture map, the map itself has its own local coordinate system, i.e., which direction is up, down, left, right, etc. The bump map is mapped arbitrarily onto the polygon surface and therefore may not necessarily share the same orientation as the view coordinate system. Since bump map perturbations will be done in 2D space, only a 2D rotation value is necessary to specify the 2D rotation of the bump map coordinate system relative to the view coordinate system. A simple method of obtaining said bump map rotation is to perform a comparison of the bump map orientation (using the bump map coordinate values provided at each polygon vertex) to the screen orientation of the translated polygon (since the screen orientation corresponds directly to the view coordinate system). Two 2D bump map rotation vectors are required to specify the translation from the bump map orientation to the view orientation. The use of any known techniques to obtain said 2D vectors is acceptable. In one embodiment of the present invention, the bump map orientation vectors are used to rotate each of the above mentioned 2D surface angle vectors, n_i. [0029] The next section details the per-pixel operations of the present invention. As previously stated, the per-pixel operations of the present invention are performed for at least each visible pixel on the screen surface of the current polygon during the scan-line conversion of the polygon. Note that the per-pixel operations detailed herein need not be performed concurrently with the drawing of the current polygon to video RAM.
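The per-polygon pipeline described above, rotating each vertex normal by the 3×3 view matrix M and converting the result into an angle-proportional 2D vector, can be sketched in Python. The 1.0 = 90° scaling is an assumption consistent with the 2.0 = 180° table range quoted later in the disclosure, and the function names are ours:

```python
import math

def rotate(M, N):
    """R_i = M * N_i with a 3x3 rotation matrix M (given as rows)."""
    return tuple(sum(M[r][c] * N[c] for c in range(3)) for r in range(3))

def to_angle_proportional(R):
    """Convert rotated unit normal R to a 2D vector whose length equals
    R's angle from the view z-axis, scaled so that 1.0 = 90 degrees."""
    x, y, z = R
    angle = math.acos(max(-1.0, min(1.0, z)))
    xy = math.hypot(x, y)
    if xy == 0.0:
        return (0.0, 0.0)          # normal points straight along the view axis
    scale = (angle / (math.pi / 2)) / xy
    return (x * scale, y * scale)

# identity view rotation; a normal 90 degrees off the view axis maps to length 1.0
I = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
print(to_angle_proportional(rotate(I, (1.0, 0.0, 0.0))))   # -> (1.0, 0.0)
```

Note that this 3D-to-2D conversion happens only per vertex, which is why the trigonometry here is acceptable where per-pixel square roots are not.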
Alternate embodiments of the present invention perform per-pixel lighting operations prior to final rendering to screen memory. Additional embodiments of the present invention perform per-pixel lighting operations after color values have been placed in screen memory. It is, however, a preferred method to perform per-pixel lighting operations concurrently with the drawing of color values to screen memory. [0030] Initially, the previously mentioned set of 2D surface angle vectors is interpolated from their vertex values, n_i, to produce an aggregate surface angle vector, n, for the current pixel. [0031] Next, the aggregate surface angle vector, n, is combined with a 2D bump map vector, b. In one embodiment of the present invention, the bump map vector is obtained from a given bump map and accessed by interpolated bump map coordinates given at the polygon vertices in accordance with standard vertex mapping techniques well-known by those skilled in the applicable art. The 2D bump map vector may be obtained directly from the texel values stored in the bump map. Alternately, the 2D bump map vector may be calculated from retrieved texel values stored in the bump map. One well-known example of said bump map vector calculations is storing relative height values in the bump map. Height values are retrieved for the nearest three texel values. Assuming that the texel at coordinates x, y (t(x,y)) maps to the current pixel, then texels t(x,y), t(x+1,y), and t(x, y+1) are loaded from the bump map. Since each texel contains a scalar height value, the 2D bump map vector, b, is calculated from the differences between the height value at t(x,y) and the height values at its two neighboring texels. [0032] An alternate method for storing bump map data involves storing a polar representation of the bump map vector at each texel. The polar representation comprises two fields, one for the 2D angle of the bump map vector and another for the magnitude of the bump map vector. A preferred method of retrieving the 2D bump map vector from said polar representation is through the use of a lookup table.
The direction and magnitude values (or functions of those values) are used to index a lookup table which returns the appropriate 2D bump map vector. The primary advantage of storing bump map vectors in polar representation is that the rotation of polar vectors is easily accomplished. In the aforementioned embodiments in which the bump map vector is rotated to view orientation, said rotation is facilitated by storing bump map vectors in polar representation. Rotating a polar vector involves providing a scalar angle of rotation (for example, an 8-bit number where a value of 256 is equivalent to 360°) and simply adding that number to the rotation value of the polar vector. [0033] For added image quality, map based bump map values may be additionally interpolated with any well-known texel interpolation scheme such as bi-linear or tri-linear interpolation. For direct mapping schemes, i.e., where texels contain 2D bump map vectors, the vector values given at each texel are interpolated. Alternately for indirect mapping schemes, such as the height map detailed above, it is desirable to first calculate all necessary 2D bump map vectors and subsequently interpolate those vectors. It should be noted that one or more 2D bump map vectors may be combined to produce the final b vector. The ability to easily combine and aggregate multiple bump maps and/or to combine bump map perturbation with a variable surface normal is an advantageous feature of the present invention since this technique provides for a great deal of flexibility, reusability and decreased memory costs in many 3D graphics applications. [0034] In an alternate embodiment of the present invention, 2D bump map values are calculated procedurally from a function of the surface position (and other optional values). Procedural texture/bump map techniques offer the advantages of flexibility and minimal memory usage balanced with the cost of additional calculation.
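For the height-map representation described above (scalar heights at t(x,y), t(x+1,y), and t(x,y+1)), a minimal sketch of the 2D bump vector derivation might look as follows. The sign convention is our assumption, since the disclosure states only that b is formed from the height differences:

```python
def bump_vector(height_map, x, y):
    """2D bump vector from a scalar height map: the differences between
    the current texel's height and its +x and +y neighbours.
    (Sign convention is an assumption, not specified in the text.)"""
    h  = height_map[y][x]
    hx = height_map[y][x + 1]     # t(x+1, y)
    hy = height_map[y + 1][x]     # t(x, y+1)
    return (h - hx, h - hy)

# tiny 2x2 height map; the current pixel maps to texel (0, 0)
hm = [[0.0, 0.2],
      [0.1, 0.3]]
print(bump_vector(hm, 0, 0))   # -> (-0.2, -0.1)
```

Because only three texel fetches and two subtractions are needed, this indirect scheme stays cheap per pixel while storing one scalar per texel instead of a full 3D vector.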
Alternately, if bump mapping is not selected for the current polygon, a null b vector (0,0) can be used. In this case, it is not necessary to combine the bump map vector with the n vector and the combination step may therefore be skipped. For the purposes of clarity and continuity of the example detailed herein, a b vector of (0,0) will be used for cases in which bump mapping is not used. [0035] Once the bump map vector, b, is obtained, it is combined with the n vector through vector addition to produce the composite surface angle vector, c: c = n + b. [0036] The c vector represents the composite orientation of the polygon surface at the current pixel with respect to polygon curvature and bump map perturbation. FIG. 3 demonstrates the combination of a surface angle vector and a bump map vector to produce a composite surface angle vector. [0037] Once the c vector is arrived at, the view reflection vector is next calculated. The view reflection vector represents the direction the view vector reflects off of the surface at the current pixel. Since the 2D vector coordinate space is angle-proportional to the view vector, the direction of the view vector is located at coordinates (0,0). Consequently, the 2D view reflection vector, r, reflected around the c vector (which represents the current pixel surface orientation) is simply the c vector doubled: r = 2c. [0038]FIG. 4 illustrates the production of the view reflection vector r from the composite surface angle vector. [0039] The above calculation is accurate provided that the direction of view is always directed along the z-axis of the view coordinate system. For most applications, this assumption is accurate enough to produce visually sufficient results. However, the exact view direction varies in accordance with the screen position of the current pixel since its screen position represents an intersection between the view plane and the vector from the focal point to the object surface. The preceding scenario in which the view direction is allowed to vary with screen coordinates is commonly referred to as a point viewer.
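The combination and reflection steps above reduce to a handful of additions, which is the core of the scheme's efficiency. A minimal sketch (our function names; the optional offset o corresponds to the point-viewer displacement discussed in the text):

```python
def composite(n, b):
    # c = n + b : interpolated surface angle vector plus bump perturbation
    return (n[0] + b[0], n[1] + b[1])

def view_reflection(c, o=(0.0, 0.0)):
    # r = 2c for a fixed viewer; a point-viewer screen offset o, when
    # supplied, displaces the result
    return (2 * c[0] + o[0], 2 * c[1] + o[1])

c = composite((0.25, 0.0), (0.0, 0.25))
print(c, view_reflection(c))   # -> (0.25, 0.25) (0.5, 0.5)
```

Note there is no normalization and no trigonometry here: in angle-proportional space, bump combination is an add and reflection is a doubling.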
In cases in which point viewing is desired, the view reflection vector, r, must be calculated in an alternate manner. First, the 2D displacement vector of the screen coordinates of the current pixel and the screen coordinates of the center of the screen must be found. Assuming the screen coordinates of the current pixel are represented by 2D vector p, and the screen coordinates of the center of the screen are represented by 2D vector h, the 2D displacement vector, d, is calculated as follows: d = p - h. [0040] Next, 2D displacement vector d is converted to an approximately angle-proportional 2D offset vector, o. The most straightforward way to convert d to o is to multiply d by a scalar value, y, representing the ratio of the viewing angle to the screen width. The viewing angle represents the total angle from the focal point to the two horizontal (or vertical) edges of the screen and should be given in the same angle-proportional scale as other angle-proportional vectors (in this example, a value of 1.0 representing 90°). [0041] In order to calculate view reflection vector, r, in the case of a point viewer, the r vector is positively displaced by o. The formula for r is: r = 2c + o. [0042] The calculation is illustrated in FIG. 5. [0043] It should be noted that the above formula is only an approximation of the true view reflection vector. However, the approximate view reflection calculated by the preceding formula is able to produce visually consistent and convincing images with little or no discernable loss in image quality. In alternate embodiments of the present invention, the r vector, as opposed to the c vector, is used to address an environment map as previously detailed. [0044] Once the 2D composite surface angle vector and view reflection vector are calculated, they are next transformed into normalized (unit length) 3D vectors. The 2D composite surface angle vector, c, is transformed into normalized 3D composite surface vector C.
Likewise, 2D view reflection vector, r, is transformed into normalized view reflection vector A. The conversion from a 2D angle-proportional vector to a normalized 3D vector by mathematical calculation is computationally expensive in terms of hardware complexity and computation time. Therefore, it is a preferred practice of the present invention to perform said conversion from 2D angle-proportional vector to normalized 3D vector with the aid of a lookup table. The use of a lookup table offers the advantage of being able to produce normalized composite surface and reflection vectors without using a square root operation. The complexity of the square root operation combined with the difficulty of calculating 3D composite surface and view reflection vectors has heretofore prohibited practical real-time calculation of complex lighting effects. Methods of the present invention using lookup tables, therefore, represent a significant improvement in the real-time calculation of complex per-pixel lighting effects. [0045] A preferred lookup table method is to use fixed point x and y coordinates of an angle-proportional vector to directly access a 2D lookup table wherein said lookup table contains normalized 3D vectors. The vectors contained in the lookup table may be stored in either floating point or fixed-point format. For matters of efficiency, however, it is a preferred practice of the present invention to store 3D lookup table vectors in fixed-point format. For example, a fixed-point format of 8 bits per vector component, i.e., 24-bits per 3D vector, would provide sufficient accuracy while minimizing the size of the lookup table. Fixed point 3D vectors obtained from the lookup table can easily be converted to floating point format for further calculation if necessary. 
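As an illustration of the fixed-point storage just described (8 bits per component, 24 bits per 3D vector), one plausible packing scheme is sketched below. The exact bit layout is our assumption; the disclosure specifies only the bit widths:

```python
def pack(v):
    """Pack a normalized 3D vector into 24 bits, 8 bits per component,
    mapping [-1.0, 1.0] onto signed bytes (layout is an assumption)."""
    out = 0
    for comp in v:
        byte = max(-127, min(127, round(comp * 127))) & 0xFF
        out = (out << 8) | byte
    return out

def unpack(p):
    """Recover the (quantized) floating point components."""
    comps = []
    for shift in (16, 8, 0):
        byte = (p >> shift) & 0xFF
        if byte > 127:            # undo two's complement
            byte -= 256
        comps.append(byte / 127)
    return tuple(comps)

q = unpack(pack((0.0, -1.0, 1.0)))
print(q)   # -> (0.0, -1.0, 1.0)
```

The quantization error is at most about 1/254 per component, which is in line with the text's claim that 8 bits per component is sufficient for shading accuracy while keeping the table small.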
In order to further enhance visual consistency, lookup table vectors can be interpolated using any of a number of well-known interpolation techniques including, but not limited to, bi-linear and tri-linear interpolation, quadratic interpolation and cubic interpolation. The size of the lookup table can be additionally decreased due to the fact that the coordinate system is symmetric about the x and y axes. Therefore, the lookup table need only cover the positive x/positive y quadrant. To utilize such a lookup table, negative x and y coordinates (in the 2D vector used to address the table) are first negated and the 3D vector is retrieved (and optionally interpolated) from the table. Then the corresponding x and/or y coordinates in the 3D vector are negated provided that the x and/or y coordinates of the 2D addressing vector were originally negative. Since several vector additions may be performed on angle-proportional vectors, the final c and r vectors can have lengths greater than 2.0 (equivalent to 180°). Therefore, the 2D lookup table must at least cover coordinate values ranging from 0 to 2.0. A 512×512 map should be of sufficient accuracy to cover such a range; however, larger maps may be implemented depending on the desired accuracy.
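The quadrant-symmetric 2D lookup table can be sketched as follows. A small 64×64 table is used here for brevity (the text suggests 512×512), and the 1.0 = 90° angle scale is an assumption consistent with the 2.0 = 180° range stated above; no square root appears on the per-pixel lookup path:

```python
import math

SIZE = 64          # table resolution; the text suggests 512x512 in practice
MAX_LEN = 2.0      # angle-proportional lengths 0..2.0 (2.0 = 180 degrees)

def make_table():
    """Precompute normalized 3D vectors for the positive quadrant only."""
    tab = [[None] * SIZE for _ in range(SIZE)]
    for iy in range(SIZE):
        for ix in range(SIZE):
            x = ix / (SIZE - 1) * MAX_LEN
            y = iy / (SIZE - 1) * MAX_LEN
            ln = math.hypot(x, y)
            angle = min(ln, MAX_LEN) * math.pi / 2   # 1.0 == 90 degrees
            if ln == 0.0:
                tab[iy][ix] = (0.0, 0.0, 1.0)
            else:
                s = math.sin(angle)
                tab[iy][ix] = (x / ln * s, y / ln * s, math.cos(angle))
    return tab

TABLE = make_table()

def lookup(v):
    """Fold the signed 2D vector into the positive quadrant, fetch the
    3D vector, then restore the signs (the symmetry trick in the text)."""
    ix = min(SIZE - 1, round(abs(v[0]) / MAX_LEN * (SIZE - 1)))
    iy = min(SIZE - 1, round(abs(v[1]) / MAX_LEN * (SIZE - 1)))
    x, y, z = TABLE[iy][ix]
    return (-x if v[0] < 0 else x, -y if v[1] < 0 else y, z)

C = lookup((-1.0, 0.0))   # unit-length 3D vector pointing away from +x
```

The trigonometry is paid once at table build time; each per-pixel conversion is then two index computations, one fetch, and two conditional negations.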
[0047] The lookup table strategies detailed above are presented for the purpose of example only and, as can be recognized by someone skilled in the applicable art, any adequate lookup table strategy may be employed without departing from the scope of the present invention as defined by the appended claims and their equivalents. [0048] Regardless of the calculation method applied, the conversion of 2D vectors c and r to normalized 3D vectors produces unit-length 3D composite surface vector C and unit-length 3D view reflection vector A. The C and A vectors can then be used in calculating diffuse and specular light coefficients for any number of light sources. Given a light source whose direction is represented by unit-length light source vector L, the diffuse coefficient, c_d, of said light source at the current pixel is given by: c_d = C·L. [0049] The specular coefficient, c_s, is likewise given by: c_s = A·L. [0050] The specular coefficient value c_s may then be used in standard specular lighting calculations. [0051] A further alternate embodiment utilizes a one-dimensional lookup table as in the previously mentioned lookup table strategy. As with the aforementioned strategy, a z-value and scalar s value are provided by the lookup table. In this embodiment, however, the s value is not used to scale the x and y values of the addressing vector. Rather, the addressing vector, with the aforementioned z-value included, is used as a 3D vector in the above mentioned diffuse and/or specularity dot product calculation. The result of the dot product calculation is then scaled by the s value to produce the correct shading value. [0052] Once diffuse and specular components have been calculated, they may be used as scalar values to apply diffuse and specular lighting to the current pixel. Standard color based pixel lighting algorithms utilizing scalar light coefficients are well-known to those skilled in the art.
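Under the definitions above, the coefficient calculations are plain dot products. A minimal sketch (clamping negative, back-facing results to zero is a conventional addition on our part, not something the text states):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_coefficients(C, A, L):
    """c_d = C . L (diffuse) and c_s = A . L (specular) for unit light
    vector L; negative results are clamped to zero (our convention)."""
    return max(0.0, dot(C, L)), max(0.0, dot(A, L))

# surface facing the light head-on; reflection 45 degrees off the light
cd, cs = light_coefficients((0.0, 0.0, 1.0),
                            (0.0, 0.7071, 0.7071),
                            (0.0, 0.0, 1.0))
print(cd, cs)   # -> 1.0 0.7071
```

Because C and A arrive already normalized from the lookup table, each light source costs only two dot products per pixel.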
Any such lighting algorithm (which requires scalar diffuse and specular coefficient values) may be applied to modulate the color of the current pixel. [0053] A further aspect of the present invention applies to the calculation of point light source direction vectors. As opposed to directional light sources, where the light source direction is constant within the frame, the direction of point light sources is variable across a surface. The direction at which a point light strikes a surface is determined by the difference between the position of the surface and the light source. A prior art approach to the calculation of point light source direction vectors involves normalizing the difference vector between the light source position and the surface position. Since standard vector normalization requires computationally expensive division and square root operations, the application of said approach to the calculation of point light source direction vectors is infeasible for efficient real-time operation. A method is presented for the accurate calculation of point light source direction vectors that does not involve division or square root operations. [0054] According to the present invention, a 3D difference vector, D, is obtained for at least every drawn pixel. The difference vector is found by the following formula: D = P - S, [0055] where P is a 3D vector in the view coordinate system representing the location (in 3D space) of the point light source and S is a 3D vector in the view coordinate system representing the location (in 3D space) of the polygon surface at the current pixel. The preceding vector subtraction may be performed on a per-pixel basis wherein the S vector is appropriately updated for each pixel.
Alternately, a set of point light source direction vectors, D_i, may be calculated at the polygon vertices and interpolated across the polygon surface. [0056] Once the D vector is obtained for the current pixel, a scalar value, k, is calculated which scales the difference vector to unit length. [0057] In a preferred embodiment of the present invention, a lookup table is used in the determination of the k value. A preferred one-dimensional lookup table contains k values (in fixed or floating point format) and is addressed by a function of D·D. The D vector, however, may be of arbitrary length, thereby requiring a large lookup table to determine accurate k values. Therefore, in a preferred practice, the D vector is scaled prior to the calculation of the k value. A preferred method for the scaling of the D vector is presented herein. First, the largest component (either x, y, or z) of the D vector is found, i.e., max(x, y, z). Next, an exponent value, n, is found from the max component value such that 2^(n-1) ≤ m < 2^n, [0058] where m is said maximum component value of D. Next, a 3D scaled difference vector, E, is calculated where: E = D/2^n. [0059] A scalar length value, g, is next calculated by: g = E·E. [0060] This scheme is advantageous since the n value can be found directly from the exponent field of a number in a standard floating point format and division by a power of two simply requires an exponent subtraction for floating point numbers. [0061] Finally, the above mentioned g value is used to obtain k from the preferred lookup table method detailed previously. Once k and E have been calculated, lighting equations may now be carried out for the point light source. As defined above, the diffuse and specular coefficients become c_d = C·(kE) and c_s = A·(kE), [0062] where vectors C and A are the 3D composite surface vector and 3D view reflection vector as previously defined. Now lighting coefficients for a point light source have been calculated without using costly square root or division operations. This process allows for point lighting to be efficiently and practically applied in real-time image generation.
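The point-light normalization scheme above can be sketched end to end. In this illustrative sketch, math.frexp stands in for reading the floating point exponent field and math.sqrt stands in for the k lookup table; the structure (power-of-two scaling so that a small, bounded table suffices) is the point being demonstrated, and the function name is ours:

```python
import math

def unit_point_light(P, S):
    """Normalize D = P - S without a general per-pixel square root:
    scale D by a power of two taken from the exponent of its largest
    component, then fetch k ~ 1/sqrt(E . E) from a (simulated) table.
    Assumes P != S so that D is non-zero."""
    D = tuple(p - s for p, s in zip(P, S))
    m = max(abs(c) for c in D)
    n = math.frexp(m)[1]                  # exponent: 2**(n-1) <= m < 2**n
    E = tuple(c / (2.0 ** n) for c in D)  # components now within [-1, 1)
    g = sum(c * c for c in E)             # bounded, so a small k table suffices
    k = 1.0 / math.sqrt(g)                # stand-in for the k lookup
    return tuple(k * c for c in E)

print(unit_point_light((10.0, 0.0, 0.0), (2.0, 0.0, 0.0)))   # -> (1.0, 0.0, 0.0)
```

Because every component of E lies in [-1, 1), g is bounded by 3.0 regardless of how far away the light is, which is exactly what makes a small fixed-size k table workable.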
[0063] A novel and useful aspect of the present invention as disclosed above is that, in certain embodiments, it allows shading data, such as light and surface normal vectors, to be specified in a recognized standard format. In many well-known lighting systems, such as Gouraud and Phong shading, lights are specified with 3D vectors (specifying normalized direction for parallel lights and position for point lights) along with color and brightness information. Likewise, in the aforementioned lighting systems, surface curvature is specified by providing a normalized 3D surface angle vector for each polygon vertex. Also, a common format for bump map data, well known to those skilled in the art, is a height value for each bump map texel, as detailed previously in this disclosure. The use of a common interface allows for quick cross-platform development by way of a standard programming interface. Most current 3D programming interfaces, such as OpenGL and DirectX, provide functionality for specifying standard shading data (light and surface normal vectors in the above-mentioned standard format) for lighting in 3D graphics applications. Many current programming interfaces also support standard bump maps. [0064] The methods and operations of the present invention do not require additional, or alternate, inputs beyond the above-mentioned standard shading data, i.e., light and surface normal vector data. In the present invention, vertex normal values are specified as normalized 3D vectors and light vectors are specified in a compatible format, i.e., a 3D vector for direction or position along with additional color and brightness information. Bump maps may be given in any of several standard formats wherein no additional, algorithm-specific information is required. The ability of the present invention to operate accurately and efficiently with standard inputs is a primary advantage. 
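For illustration only, the "standard shading data" the text refers to can be modeled with data structures like the following; the type and field names are mine, not the patent's or any particular API's:

```python
# Sketch of the standard shading inputs described above: a light as a 3D
# vector (direction for parallel lights, position for point lights) plus
# color and brightness; a vertex with a normalized 3D normal; and a bump map
# as one height value per texel.

from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Light:
    vector: Vec3        # direction (parallel light) or position (point light)
    color: Vec3         # RGB
    brightness: float
    is_point: bool

@dataclass
class Vertex:
    position: Vec3
    normal: Vec3        # normalized 3D surface angle vector

@dataclass
class BumpMap:
    width: int
    height: int
    heights: List[float]  # one height value per texel
```

No algorithm-specific fields appear here, which is the point made in paragraph [0064]: the method consumes only data that standard 3D programming interfaces already express.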
Most well-known 3D shading speed-up methods require algorithm-specific input data in order to perform correctly, thereby limiting the application of said speed-up methods to custom programming interfaces. Most 3D graphics software developers have experience in standard 3D programming interfaces and develop cross-platform applications wherein the use of said standard 3D programming interfaces is a necessity. The use of non-standard programming interfaces demanded by many 3D lighting algorithms serves as a severe limiting factor to their widespread use in industry applications. Use of the present invention is advantageous since it requires no additional, “non-standard” input data to operate correctly and efficiently. Therefore, the features of the present invention, implemented in either software or custom hardware, can be accessed by current programming interfaces without requiring software developers to produce additional, application-specific code. The present invention provides a universal shading interface whereby cross-platform applications can take advantage of the advanced lighting features of the present invention on platforms that support them, while still working correctly, i.e., defaulting to simpler shading algorithms such as Gouraud shading, on platforms that do not support advanced lighting. The methods and operations of the present invention provide the ability to accurately and efficiently utilize advanced shading techniques that are accessible through a standard 3D programming interface. [0065] In order to provide maximum rendering speed and efficiency, a hardware implementation of the present invention is preferred. 
Since the methods of the present invention are not exceedingly complex, they can be implemented without excessive hardware expense in a number of 3D graphics systems including, for example, consumer-level PC graphics accelerator boards, stand-alone console video game hardware, multi-purpose “set top boxes,” high-end workstation graphics hardware, high-end studio production graphics hardware, and virtual reality devices. Although a hardware implementation is preferred, those skilled in the art will recognize that alternate embodiments of the present invention may be implemented in other forms including, but not limited to: as a software computer program, as a micro-program in a hardware device, and as a program in a programmable per-pixel shading device. [0066] The following sections describe preferred hardware implementations for the per-polygon, per-pixel, and point lighting operations of the present invention. The hardware implementation provided is used as part of, and assumes the existence of, a 3D graphics processing hardware element (such as a 3D graphics accelerator chip). The per-pixel (and point lighting) operations of the present invention serve to provide diffuse and/or specular lighting coefficients for one or more light sources. These lighting coefficients may subsequently be used in shading hardware to scale the corresponding light source colors and to modulate pixel color with said scaled colors. Techniques for utilizing light source colors and light coefficients to modulate pixel colors are numerous and well known to those skilled in the art. It is the objective of the present invention to provide an efficient method and system that produces normalized 3D composite surface and view reflection vectors and consequently produces diffuse and/or specular light coefficients for one or more light sources on a per-pixel basis. 
Therefore, it is outside the scope of this disclosure to provide a detailed description of the above-mentioned shading hardware, although it should be noted that the preferred hardware embodiment of the present invention is designed to work in conjunction with dedicated shading hardware. [0067]FIG. 6 shows a diagram of a preferred hardware implementation of the per-polygon operations of the present invention. The hardware per-polygon operations assume the presence of a current polygon, a set of 3D surface normal vectors (N [0068]FIG. 7 shows a block diagram of an AP translation unit which converts a 3D vector into a 2D angle-proportional vector [0069] After the above-mentioned set of R vectors has been transformed to the above-mentioned set of n vectors, the n vectors are then stored, preferably in a local memory, to be used later during the per-pixel operations of the present invention. Alternate embodiments calculate a bump-map rotation at this stage of operations. In said alternate embodiments, the set of n vectors and the set of light source vectors (L [0070]FIG. 8 shows a logic diagram for a preferred hardware embodiment of the per-pixel operations of the present invention. The hardware per-pixel operations assume the presence of a current polygon, a set of 3D light source vectors expressed relative to the view coordinate system, and a set of 2D surface angle vectors (n [0071] At vector addition unit [0072]FIG. 9 shows a logic diagram of the point light operations of the present invention. Point light operations are performed on the same (per-pixel) basis as the above detailed per-pixel operations. In a preferred embodiment, the point light operations are performed in parallel with the per-pixel operations. Alternate embodiments perform point light operations in series with per-pixel operations. 
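As a hedged software sketch of the point-light per-pixel dataflow described around FIG. 9 (the hardware's angle-proportional vector form is not reproduced in this text, so plain 3D vectors are used here, and the clamping of negative dot products is my assumption):

```python
import math

def dot(a, b):
    """3D dot product."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    # Stand-in for the patent's lookup-table normalization; a hardware
    # implementation would avoid this per-pixel sqrt and division.
    inv = 1.0 / math.sqrt(dot(v, v))
    return (v[0] * inv, v[1] * inv, v[2] * inv)

def point_light_coefficients(P, S, C, A):
    """Per-pixel point light coefficients: form D = P - S, normalize it to
    get the light vector L, then dot L with the composite surface vector C
    (diffuse) and the view reflection vector A (specular)."""
    L = normalize((P[0] - S[0], P[1] - S[1], P[2] - S[2]))
    c_diffuse = max(0.0, dot(L, C))
    c_specular = max(0.0, dot(L, A))
    return c_diffuse, c_specular
```

The returned coefficients would then be consumed by downstream shading hardware to scale light source colors, as paragraph [0066] describes.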
The hardware point light source operations assume the presence of a current polygon, a set of 3D surface position vectors (S [0073] The above section details a practical and efficient hardware configuration for the real-time calculation of normalized 3D surface and reflection vectors where the surface direction is interpolated and dynamically combined with bump map values on a per-pixel basis. Likewise, the hardware described above calculates, in real time, diffuse and specular lighting coefficient values for one or more directional light sources from a dynamically variable surface. Furthermore, the above hardware configuration is able to calculate, in real time, diffuse and specular lighting coefficient values for one or more point light sources from a dynamically variable surface. The embodiments described above are included for the purpose of describing the present invention and, as should be recognized by those skilled in the applicable art, are not intended to limit the scope of the present invention as defined by the appended claims and their equivalents.