CA1304824C - Pipelined lighting model processing system for a graphics workstation's shading function - Google Patents

Pipelined lighting model processing system for a graphics workstation's shading function

Info

Publication number
CA1304824C
CA1304824C (application CA000581529A)
Authority
CA
Canada
Prior art keywords
processing
vertices
floating point
color
vertex
Prior art date
Legal status
Expired - Lifetime
Application number
CA000581529A
Other languages
French (fr)
Inventor
Jorge Gonzalez-Lopez
Bruce C. Hempel
Bob C.C. Liang
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted granted Critical
Publication of CA1304824C publication Critical patent/CA1304824C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models

Abstract

Abstract of the Disclosure A lighting model processing system for a computer graphics workstation's shading function includes multiple floating point processing stages arranged and operated in pipeline. Each stage is constructed from one or more identical floating point processors. The lighting model processing system supports one or more light sources illuminating an object to be displayed, with parallel or perspective projection. Dynamic partitioning can be used to balance the computational workload among various of the processors in order to avoid a bottleneck in the pipeline. The high throughput of the pipeline system makes possible the rapid calculation and display of high quality shaded images.

Description


PIPELINED LIGHTING MODEL PROCESSING SYSTEM
FOR A GRAPHICS WORKSTATION'S SHADING FUNCTION

Background of the Invention

The present invention relates, generally, to the field of computer graphics. Computer graphics display systems, e.g. CAD/CAM graphics workstations, are widely used to generate and display images of objects for scientific, engineering, manufacturing and other applications.
In such computer graphics systems, surfaces of an object are usually represented by a polygon mesh. A polygon mesh is a collection of vertices, edges, and polygons. The vertices are connected by edges, while polygons can be thought of as sequences of edges or of vertices. To present a visual image of an object on the viewing screen of the display which is more realistic in appearance than the corresponding polygon mesh, procedures have been developed for removing hidden surfaces and shading and adding texture to visible surfaces.
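The polygon-mesh representation described above can be sketched as follows. This is a minimal illustration, not the patent's data layout; all names are hypothetical:

```python
# A shared vertex list, a unit normal per vertex, and polygons stored
# as sequences of vertex indices; edges are implied by consecutive
# index pairs (wrapping around at the end of each polygon).

vertices = [  # (x, y, z) in viewing space
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
normals = [(0.0, 0.0, 1.0)] * 4  # unit normal (a, b, c) at each vertex

polygons = [(0, 1, 2), (0, 2, 3)]

def polygon_edges(poly):
    """Yield the edges of a polygon as ordered vertex-index pairs."""
    n = len(poly)
    return [(poly[i], poly[(i + 1) % n]) for i in range(n)]
```

This shows why a mesh is compact: each vertex and its normal are stored once and shared by every polygon that references it.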
The present invention relates to implementation of a shading function in a graphics workstation and more specifically to the computer processing and display of shaded images in which a lighting model is used. Effects which may be taken into account in such a lighting model include ambient lighting, diffuse and specular reflection effects, the number, position, intensity and hue of the light sources, parallel and perspective projections, and attenuation of the light due to the distance of different portions of an object being modeled from the viewer (depth cueing).
The general system architecture of shading hardware for a graphics workstation is depicted, in block diagram form, in Figure 1. The overall shading function system 10 includes a lighting model processing system 12, a shading processing system 14, a video pixel memory 16 and a display monitor 18.
The lighting model processing system 12 calculates the color intensities (e.g. red, green and blue components) at the vertices of each polygon for a specified lighting model. The shading processing system 14 uses the information from the lighting model processing system to calculate the color intensities of pixels interior to visible polygons and sends this information to the video pixel memory 16. The display monitor 18 displays the shaded image stored in the video pixel memory.
In prior graphics workstations, calculation of the effects due to lighting has been performed by some type of general purpose processor. This approach, while having the advantage of using commonly available "off the shelf" components, suffers the disadvantage of being slow, since the entire process is performed by a single general purpose processing element.

Summary of the Invention

According to the present invention, calculation of the effects due to lighting is done by a number of identical floating point processing elements that are connected and operated in a pipeline fashion. The pipeline configuration used may be purely serial, or it may be serial with certain stages of the pipeline containing parallel arrangements of the same identical processing element.
By using a pipeline arrangement of multiple identical processing elements to perform the computation-intensive lighting model calculations, a many-fold improvement in throughput is achieved. By dramatically increasing the number of polygons/second the system can process, it is possible to use much finer polygonal meshes to represent any given object, and the use of finer meshes yields shaded images of much greater realism. Thus, the performance increase afforded by the use of the present invention manifests itself to the workstation user in the forms of improved interactivity and higher image quality.
Accordingly, it is a principal object of the present invention to provide a method and apparatus for performing lighting model processing, as part of a shading function in a computer graphics display system, which exhibits higher throughput than heretofore available.
Another object is to provide a lighting model processing system which facilitates higher image quality and realism and improved interactivity.
Yet another object is to provide a lighting model processing system which is readily assembled, and is fast, efficient and versatile in operation.

Brief Description of the Drawings

These and other objects, features and advantages of the present invention will be more readily understood from the following detailed description, when read in conjunction with the accompanying drawings in which:
Figure 1 is a block diagram depicting a hardware implementation of a shading function in a computer graphics workstation;
Figure 2 is a block diagram of a generalized embodiment of the lighting model processing system of the present invention;
Figure 3 depicts a preferred embodiment of the processor of the present invention;
Figure 4 is a block diagram depicting a single light source configuration of a lighting model processing system of the present invention;
Figure 5 depicts an alternate single light source configuration of the lighting model processing system of Figure 4; and
Figure 6 depicts a multiple light source configuration of a lighting model processing system of the present invention.
Various embodiments of the invention, including specific structural, programming and functional details, are described hereinafter. These specifics are merely representative and are not intended to be construed as limiting the principles of the invention, the scope of which is defined by the claims appended hereto.

Detailed Description

The lighting model processing system of the present invention consists of multiple floating point processing stages connected in series and operated in pipeline fashion. Each stage preferably comprises one or more identical floating point processors (also referred to as processing elements). Each stage is considered to be a separate processing unit which performs its particular function(s) concurrently with the other stages, thereby producing a marked increase in throughput.
Figure 2 portrays a first embodiment of the lighting model calculation system 12 consisting of four floating point processing stages 20, 22, 24 and 26, in pipeline. First stage 20 and second stage 22 are used for lighting model calculations. Third stage 24 is employed to perform a projection transformation and to map from a viewing space to a screen space, and fourth stage 26 is used for depth cueing, color mapping and color clipping. The functions of these different stages are described in more detail hereinafter. The number and order of stages, as well as the partitioning of functions among the stages, may vary from that shown in Figure 2.
As illustrated in Figure 2, the input to lighting model processing system 12 consists of:
x, y, z which represent the coordinates, in viewing space, of the vertices of a polygon, and
a, b, c which represent the X, Y, and Z components of the normal at each of the vertices of the polygon.
These inputs are in floating point format and are produced by a geometry processor (not shown) earlier in the graphics display system. The output of the lighting model processing system 12 consists of:
Xs, Ys, Zs which represent the screen coordinates of the vertices of the polygon, and
C"R, C"G, C"B which represent the color intensity values (R denoting the red component, G denoting the green component and B denoting the blue component) to be displayed at each vertex of the polygon.
The output screen vertex coordinates and RGB intensity values are integers, with the intensity values being in the range of the capabilities of the display system. The output of lighting model processing system 12 is provided to a shading processing system 14, as shown in Figure 1, wherein the intensity values at the vertices are, in effect, interpolated across the visible face of the polygon so that a realistically shaded image of the object may be displayed.
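The interpolation performed by the shading processing system can be sketched for a single scan-line span (linear interpolation between two vertex intensities; a stand-in for the full polygon fill, with hypothetical names):

```python
def interpolate_span(c_left, c_right, n_pixels):
    """Linearly interpolate a color intensity across a scan-line span.

    This is the per-pixel step a shading processor applies between the
    vertex intensity values produced by the lighting model processor.
    """
    if n_pixels == 1:
        return [c_left]
    step = (c_right - c_left) / (n_pixels - 1)
    return [c_left + i * step for i in range(n_pixels)]
```

Repeating this along polygon edges and then along each scan line between edges yields the smoothly shaded interior the text describes.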

INPUT PREPROCESSING

Computer graphics display systems must handle 1-sided and 2-sided surfaces depending upon the nature of the object being modeled. To simplify the operation of the lighting model processing system of the present invention, a preprocessing step is implemented in the preceding geometry processor to ensure that all of the input polygons to the lighting model processing system are 1-sided polygons with normals pointing towards the viewer. The preprocessing step, which can be implemented in software or hardware or a combination thereof, processes incoming polygons based on their surface normal in the following way:
1. 1-sided surface --
a. If the dot product of the polygon normal and the vector from the object to the viewpoint is positive, then the polygon is facing forward, and the polygon data is sent to the lighting model processing system as input;
b. If the dot product of the polygon normal and the vector from the object to the viewpoint is negative, then the polygon is facing backward, and the polygon is discarded before reaching the lighting model processing system.
2. 2-sided surface --
a. If the dot product of the polygon normal and the vector from the object to the viewpoint is positive, then the polygon is facing forward, and the polygon data is sent to the lighting model processing system as input;
b. If the dot product of the polygon normal and the vector from the object to the viewpoint is negative, then the polygon is facing backward; therefore the vertex normals are reversed, and the rest of the polygon data is sent to the lighting model processing system as input.
Accordingly, the input to the lighting model processing system has the following format:
x1, y1, z1 (vertex 1 coordinates)
a1, b1, c1 (vertex 1 normal)
x2, y2, z2 (vertex 2 coordinates)
a2, b2, c2 (vertex 2 normal)
etc.
FLOATING POINT PROCESSOR

Each stage of the pipelined lighting model processing system of the present invention is composed of one or more identical floating point processors. The common usage of the same processor results in efficiencies in manufacturing, assembly, operation, programming and maintenance of the lighting model processing system.
A suitable processor 30 for implementing the lighting model processing system of the present invention is depicted in Figure 3. Processor 30 is a graphics floating point processing element which, in its presently preferred form, comprises a VLSI chip with 32-bit floating point capability. Processor 30 includes: a floating point multiplier 32 (preferably with 2 stages of internal pipeline and special hardware assists for calculating inverses and square roots); a floating point adder 34, also used as an accumulator for the multiplier 32; a sequencer 36; storage 38, e.g. RAM, for control programs (e.g. microcode) and control logic 40; input FIFO 42 and output FIFO 44 for interface purposes; a bank 46 of registers for storing data (e.g. sixty-four 32-bit registers); a command register 48; and a plurality of multiplexers 50, 52, 54, 56 and 58. These components of processor 30 are connected as shown in Figure 3. Sequencer 36 controls the operation of the other components and in turn is under microcode control for program flow, i.e., branch, subroutine call, etc.
The input/output data path is 33 bits wide, and consists of a 32-bit data field and a 1-bit flag which indicates whether the data field is a command or a floating point number. A command in the datastream instructs the graphics floating point processor how to use the floating point data that follows it; for instance, the datastream to cause a polygon to be shaded would consist of a 'shaded polygon' command followed by the floating point data (vertices and normals at the vertices) that define the polygon to be shaded.
The data put into the input FIFO of a Floating Point Processor is produced by a front-end processor of some type or by another Floating Point Processor. In the latter case, the other Floating Point Processor's output FIFO is directly connected to the input FIFO. A particularly advantageous approach for interconnecting such identical floating point processors in pipeline and/or parallel arrangements is described in concurrently filed, commonly assigned U.S. Patent No. 4,876,644, which issued October 24, 1989.

The microcode reads the data from the input FIFO 42, and performs the following functions:
1. if it is a command, it will be passed to a decoding RAM to branch to the correct microcode subroutine; in this case, it is also stored in the command register 48, and, if the microcode is so programmed, may be passed to the output FIFO 44;
2. if it is floating point data, it can be
a. stored in the register bank 46; or
b. passed to the multiplier 32 as one of the inputs to the multiplier; or
c. passed to the adder (accumulator) 34 as one of the inputs to the adder; or
d. passed directly to the output FIFO 44; or
e. ignored (not used).
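The 33-bit word format and the dispatch rule above can be sketched as follows. The tag mirrors the command/data flag; the handler table stands in for the decoding-RAM branch table, and all names are hypothetical:

```python
COMMAND, DATA = 1, 0  # the 1-bit flag accompanying each 32-bit word

def dispatch(words, handlers):
    """Route a tagged word stream the way the microcode routes the FIFO.

    words:    iterable of (flag, value) pairs
    handlers: maps a command value to a function that consumes the data
              words following it (a stand-in for the branch table).
    """
    stream = iter(words)
    for flag, value in stream:
        if flag == COMMAND:
            handlers[value](stream)  # branch to the matching "subroutine"
```

Each handler pulls its own operands from the stream, just as a microcode subroutine reads data words from the input FIFO as needed.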

PROCESSOR DATA FLOW

The following example of vector multiplication is helpful in understanding how data flows within processor 30.
Example: Calculate m1*x + m2*y + m3*z
where m1, m2, m3 are in the internal registers, an asterisk "*" denotes a multiplication operation, and the input data is:
vector multiplication (command)
x (data)
y (data)
z (data)
1. read from the input FIFO 42 the 32-bit words;
2. when the vector multiplication command is encountered (signaled to the hardware by the command bit being on, and the particular value of the 32-bit data field), the microcode branches to a subroutine in the microcode storage that performs the vector multiplication, the address of the subroutine being obtained by the hardware via a branch table;
3. the subroutine reads in the next word, in this case the data x; the data x will be passed to the multiplier input, and m1 will be passed to the other input of the multiplier 32 for the multiplication operation.
4. the subroutine reads in the next word, in this case the data y; the data y will be passed to the multiplier input, and m2 will be passed to the other input of the multiplier for the multiplication operation; at the same time the product m1*x is passed to the second internal stage of the multiplier.
5. the subroutine reads in the next word, in this case the data z; the data z will be passed to the multiplier input, and m3 will be passed to the other input of the multiplier for the multiplication operation. At the same time, the product m1*x is accumulated in the adder with the value zero.
6. the product m2*y is passed to the adder 34 and added to m1*x.
7. the product m3*z is passed to the adder and added to m1*x + m2*y, and the result is written to the output FIFO 44.
In the lighting model processing system of this invention, a typical data stream will be as follows:
begin shaded polygon (command)
x1 (vertex data)
y1 (vertex data)
z1 (vertex data)
a1 (vertex normal data)
b1 (vertex normal data)
c1 (vertex normal data)
x2 (vertex data)
y2 (vertex data)
z2 (vertex data)
a2 (vertex normal data)
b2 (vertex normal data)
c2 (vertex normal data)
...
xn (vertex data)
yn (vertex data)
zn (vertex data)
an (vertex normal data)
bn (vertex normal data)
cn (vertex normal data)
end shaded polygon (command)
The processing of the above datastream is done as follows:
1. the microcode reads from the input FIFO the 32-bit words of the datastream as needed;
2. when the 'begin shaded polygon' command is encountered (signaled to the hardware by the command bit being on, and the particular value of the 32-bit data field), the microcode branches to a subroutine in the microcode storage that processes shaded polygon data, the address of the subroutine being obtained by the hardware via a branch table;
3. the microcode subroutine reads data from the input FIFO as needed and processes it according to the subroutine's design; when an output is generated, it is put into the output FIFO.
4. when the 'end shaded polygon' command is processed, the Floating Point Processor jumps to another microcode routine that completes the processing required for the shaded polygon.
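The multiply-accumulate walkthrough above can be condensed into ordinary code. The two-stage multiplier and the accumulator collapse here into plain arithmetic, so this shows only the data flow, not the pipelined timing; names are illustrative:

```python
def vector_multiply(registers, stream):
    """Compute m1*x + m2*y + m3*z as in steps 1-7 above.

    registers: (m1, m2, m3) held in the internal register bank
    stream:    iterator yielding x, y, z from the input FIFO
    Returns the accumulated result that would be written to the
    output FIFO.
    """
    acc = 0.0                      # the adder starts by accumulating with zero
    for m in registers:            # m1, m2, m3 in turn
        acc += m * next(stream)    # multiplier output feeding the adder
    return acc
```

In the hardware, the point of the two multiplier stages is that the next product starts while the previous one is still being accumulated, which is what keeps the pipeline full.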


PREFERRED EMBODIMENT / DYNAMIC PARTITIONING

In the presently preferred embodiment, a single processor is used to implement each stage of the multiple stage pipeline arrangement. This preferred embodiment will now be described.
As illustrated in Figure 4, lighting model processing system 60 consists of four separate, identical, floating point processors 62, 64, 66, and 68 arranged and operated in pipeline. The first and second processors 62 and 64 perform lighting model calculations; the third processor 66 performs a projection transformation and maps from viewing space to screen space; and the fourth processor 68 performs depth cueing, color mapping and color clipping.
To optimize performance, provision is made for dynamic partitioning, preferably via a distinct datastream command read by the microcode inside each of the relevant floating point processors, of the lighting model calculations among the first and second processors in order to maintain a computational workload balance between said two processors. The partitioning of the lighting model calculations between the first and second floating point processors varies depending on the number of light sources in the model.
If there is a single light source as indicated, e.g. by a "Begin Shaded Polygon (Single Light Source)" datastream command, the first floating point processor 62 determines the light intensity due to the ambient and diffuse lighting effects, and the second floating point processor 64 determines the light intensity due to specular lighting effects. On the other hand, in the case of multiple light sources (see Figure 6), as might for example be indicated by a "Begin Shaded Polygon (Multiple Light Sources)" datastream command, the first floating point processor 62' determines the light intensity (ambient, diffuse and specular) due to odd numbered light sources, while the second floating point processor 64' determines the light intensity (ambient, diffuse and specular) due to even numbered light sources. This is done to evenly divide the work to be performed between these two floating point processors and thereby avoid creating a bottleneck in the pipeline.
More particularly, when there is only one light source in the lighting model (see Figure 4):
a. the first floating point processor 62 calculates for each vertex the intensity due to ambient light and diffuse reflection, and then passes this value to the second floating point processor;
b. the second floating point processor 64 calculates for each vertex the intensity due to specular reflection, and adds this result to the value passed to it by the first floating point processor; the second floating point processor then sends the data to the next floating point processor;
c. the third floating point processor 66 performs a projection transformation and maps the coordinate data from viewing space to the screen space of the display system, passing this data to the next floating point processor; and
d. the fourth floating point processor 68 does the calculations for depth cueing, color mapping and color clipping.
When there are multiple light sources in the lighting model (see Figure 6):
a. the first floating point processor 62' calculates for each vertex the intensity due to ambient light, and diffuse and specular reflection of the odd numbered (#1, #3, #5, #7, etc.) light sources and then passes this value to the second floating point processor;
b. the second floating point processor 64' calculates for each vertex the intensity due to ambient light, and diffuse and specular reflection of the even numbered (#2, #4, #6, #8, etc.) light sources and then adds this result to the one passed to it by the first floating point processor; it then passes this data to the third floating point processor;
c. the third floating point processor 66 performs a projection transformation and maps the coordinate data from viewing space to the screen space of the display system, passing this data to the next floating point processor; and
d. the fourth floating point processor 68 does the calculations for depth cueing, color clipping and color mapping.

Figure 5 depicts an alternate hardware embodiment 61 of the lighting model processing system of the present invention. In this alternate embodiment, the first and second stages of the system are each composed of three of the identical processors arranged in parallel. Each of these processors may be advantageously used to operate on a different color component.
By way of example, a single light source modeling system is shown in Figure 5 wherein first stage floating point processor 62R calculates ambient and diffuse lighting effects with regard to a red light component and passes the resulting red component intensity value to a second stage floating point processor 64R which calculates the specular reflection effect on the red component and adds that to the value received from floating point processor 62R. In a similar fashion, first stage floating point processor 62G and second stage floating point processor 64G perform lighting model calculations on a green component, while first stage floating point processor 62B and second stage floating point processor 64B perform the lighting model calculations on a blue component. The remainder of the system (i.e. the third & fourth stages) is unchanged.
GENERAL LIGHTING MODEL

For illustrative purposes, a RGB (red, green, blue) color model is used in this description. However, the principles of the invention are applicable to other color models as well, e.g. CMY (cyan, magenta, yellow), YIQ, HSV (hue, saturation, value), HLS (hue, lightness, saturation), etc. Similarly, the invention is applicable not only to light sources assumed to be at infinity but also to those at a finite distance, and to spot lighting and other lighting effects.
Various specific implementations of the lighting model processing system of the present invention will be described hereinafter by reference to pseudocode used to program the individual processors. The case of perspective projection (viewpoint at a finite distance) will be covered in detail. The simpler case of parallel projection (the viewpoint at infinity) will be covered briefly by mentioning the differences that occur when the viewpoint is at infinity. But first, a generalized lighting model taking into account the number of light sources, surface characteristics, and the positions and orientations of the surfaces and sources will be developed.
The Viewing Coordinate System adopted here is the right-hand system. If we consider looking at a display screen, then the x-axis points to the right, the y-axis points upward, and the z-axis points toward us.
Therefore,
z-coordinate of data < z-coordinate of viewpoint.
The following are the parameters of the lighting model with 1 light source:
IaR /* red intensity of the ambient light */
IaG /* green intensity of the ambient light */
IaB /* blue intensity of the ambient light */
Isou1R /* red intensity of the light source */
Isou1G /* green intensity of the light source */
Isou1B /* blue intensity of the light source */
kaR /* shading material red ambient constant */
kaG /* shading material green ambient constant */
kaB /* shading material blue ambient constant */
kdR /* shading material red diffuse constant */
kdG /* shading material green diffuse constant */
kdB /* shading material blue diffuse constant */
ks /* shading constant for specular reflection */
kn /* shading constant-exponent for cos */

For a given vertex x, y, z and normal N = (a, b, c) to the surface at that vertex, let:
L be the unit vector from the vertex to the light source, i.e. the direction to the point light source;
R be the direction of reflection; and
V be the direction to the viewpoint.
Then the shading color/intensity at the vertex is given by the sum of the following three terms (we only include the R-component; the other two, G and B, can be expressed in a similar fashion):
1. ambient light
IaR*kaR
2. diffuse reflection
Isou1R*kdR*(L.N) /* L.N denotes the inner product */
3. specular reflection
Isou1R*ks*((R.V)**kn)
The light sources in the lighting model considered here are assumed at infinity. Their positions are specified by directional vectors of unit length; the viewpoint is at infinity.
For multiple light sources, we have multiple terms for the second and third items. Assume there are j light sources (1 <= j <= M) where M is the maximum number of light sources to be allowed in the particular implementation.
The shading color/intensity at the vertex is given by the sum of the following three terms (we only include the R-component; the other two, G and B, can be expressed in similar fashion):
1. ambient light
IaR*kaR
2. diffuse reflection
Isou1R*kdR*(L1.N) + ... + IsoujR*kdR*(Lj.N) /* L.N denotes the inner product */
3. specular reflection
Isou1R*ks*((R1.V)**kn) + ... + IsoujR*ks*((Rj.V)**kn)
A more detailed description of the derivation of the general lighting model can be found, for example, in Foley & Van Dam, "Fundamentals of Interactive Computer Graphics", Addison-Wesley, 1984, pp. 575-578.
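The three terms above translate directly into code. This sketch assumes unit vectors and directional (at-infinity) lights as stated, and clamps the dot products at zero as the later pseudocode does; parameter names follow the patent's list, while the function names are illustrative:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def vertex_intensity_r(kaR, kdR, ks, kn, IaR, lights, N, V):
    """R-component of the shading intensity at a vertex.

    lights: list of (IsouR, L, R) per light source, where L is the unit
            direction to the light and R its reflection direction.
    N, V:   unit surface normal and unit direction to the viewpoint.
    The G and B components are computed the same way with their own
    constants.
    """
    intensity = IaR * kaR                                    # 1. ambient
    for IsouR, L, R in lights:
        intensity += IsouR * kdR * max(dot(L, N), 0.0)       # 2. diffuse
        intensity += IsouR * ks * max(dot(R, V), 0.0) ** kn  # 3. specular
    return intensity
```

The loop over `lights` is exactly the multiple-source sum: one diffuse and one specular term per source, added to a single ambient term.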
Specific exemplary implementations of the lighting model processing system of the present invention will now be presented. First, the lighting model calculations performed by the first and second processors are described for the following cases:
single light source -- viewpoint at a finite distance;
single light source -- viewpoint at an infinite distance;
multiple light sources -- viewpoint at a finite distance; and
multiple light sources -- viewpoint at an infinite distance.
Then the mapping function performed by the third processor, and the depth cueing, color mapping and color clipping functions performed by the fourth processor (which functions are common to all of the above listed cases), are presented.
In the following examples, software is presented in pseudocode, a commonly used notational scheme in which a single asterisk (*) indicates a multiplication operation, double asterisks (**) denote a power term, the symbol (<--) denotes "gets the value of", and in which a comment is bracketed between /* and */.

SINGLE LIGHT SOURCE -- VIEWPOINT AT A FINITE DISTANCE

This is the case of a single light source at a finite distance from the object with perspective projection (i.e. the viewpoint is at a finite distance from the object). The function of calculating the intensity of the vertex is divided among two floating point processors. The first floating point processor calculates the intensity of the ambient light and the diffuse term, and the second floating point processor calculates the intensity of the specular term.
The input to the first floating point processor consists of six pieces of data:
x, y, z coordinates of the vertex, and the coordinates of the normal to the surface at the vertex, i.e. a, b, c.
The output of the first floating point processor to the second floating point processor for each vertex consists of nine words:
x, y, z coordinates of the vertex;
the components of the reflection direction rx, ry, rz; and
the partial intensity values (ambient plus diffuse effects) for R, G, and B.
The pseudocode for the first and second floating point processors, for thi~ case, follow~:
Procedure Intensity Single 12 (viewpoint at a finite , distance) /~ first Floating Po~nt Processor */

Input:
x, y, z /~ coordinates of the vertex ~/
~ a, b, c /~ com~onents of the unit normal */
15:
Output:
x, y, z /~ coordinates of the vertex */
rx, ry, rz /~ components of the ~/
/~ reflection direction */
IntensR /~ r-component of intensity */
IntensG /~ g-component of intensity ~/
IntensB /~ b-component of intensity */

Constants:
~aR /~ red intensity of the Ambient Light */
IaG /~ green intensity of the Ambient Light ~/
IaB /~ blue intensity of the Ambient Light ~/
I oulR /~ red inten~ity of the light source */
IsoulG /~ green intensity of the light source ~/
IsoulB /~ blue intensity of the light source ~/
kdR /~ shading material red diffuse constant */
kdG /* shading material green diffuse constant ~/
kdB /* shading material blue diffuse constant */
XaR /* shading material red ambient constant ~/
XaG /~ shading material green ambient constant ~/

:, '' , .

~3~4~3Z~

kaB /* shading material blue ambient constant */
ks /* shading constant for specular reflection */
kn /* shading constant-exponent for cos */
ux1,uy1,uz1 /* light source 1 */
vx,vy,vz /* viewpoint */

Variables:
x, y, z /* coordinates of the vertex */
a, b, c /* components of the unit normal */
wx,wy,wz /* vector from vertex to light source */
normw /* length of (wx,wy,wz) */
rx,ry,rz /* reflection direction */
normr /* length of (rx,ry,rz) */
tx,ty,tz /* vector from vertex to viewpoint */
IntensR /* R-component of intensity */
IntensG /* G-component of intensity */
IntensB /* B-component of intensity */
normt /* length of (tx,ty,tz) */
innproc /* inner product of the vector from light source to the vertex and unit normal */
shadcos /* cos term for specular reflection */
light /* temporary value */
temp /* temporary value */

lfinite1: /* code entered when "Begin Shaded Polygon (single light source)" command is encountered */
1. read in data
2. if it is a GESP (end shaded polygon) command, output the command, and exit;
/* read in x,y,z coordinates of the vertex, and calculate the vector from the vertex to the light source */

3. read in x; and wx <-- ux1 - x;
4. output x to second Floating Point Processor
5. read in y; and wy <-- uy1 - y;
6. output y to second Floating Point Processor
7. read in z; and wz <-- uz1 - z;
8. output z to second Floating Point Processor
9. read in a; /* normal at vertex */
10. read in b;
11. read in c;
12. The inner product of the vector from vertex to the light source and the unit normal:
    innproc <-- wx*a + wy*b + wz*c
/* reflection direction for specular reflection */
13. temp <-- 2*innproc
14. rx <-- temp*a - wx
15. output rx to second Floating Point Processor
16. ry <-- temp*b - wy
17. output ry to second Floating Point Processor
18. rz <-- temp*c - wz
19. output rz to second Floating Point Processor
20. distance between the light source and the vertex
    a. normw <-- wx*wx + wy*wy + wz*wz
    b. normw <-- sqrt(normw)
21. Ambient light R-G-B components
    a. IntensR <-- kaR*IaR
    b. IntensG <-- kaG*IaG
    c. IntensB <-- kaB*IaB
22. constant used in the calculation for intensity
    a. if innproc <= 0 then light <-- 0;
    b. else light <-- innproc/normw
23. calculate Intensity (diffuse reflection)
    IntensR <-- IntensR + Isou1R*kdR*light

24. output IntensR to second Floating Point Processor
25. calculate Intensity (diffuse reflection)
    IntensG <-- IntensG + Isou1G*kdG*light
26. output IntensG to second Floating Point Processor
27. calculate Intensity (diffuse reflection)
    IntensB <-- IntensB + Isou1B*kdB*light
28. output IntensB to second Floating Point Processor
29. go to lfinite1

Procedure Intensity Single 22 (viewpoint at a finite distance) /* second Floating Point Processor */

Input:
x, y, z /* coordinates of the vertex */
IntensR /* r-component of intensity due to ambient, diffuse */
IntensG /* g-component of intensity due to ambient, diffuse */
IntensB /* b-component of intensity due to ambient, diffuse */
rx,ry,rz /* reflection direction */

Output:
x, y, z /* coordinates of the vertex */
IntensR /* r-component of intensity due to ambient, diffuse and specular */
IntensG /* g-component of intensity due to ambient, diffuse and specular */
IntensB /* b-component of intensity due to ambient, diffuse and specular */

Constants:
Isou1R /* red intensity of the light source */
Isou1G /* green intensity of the light source */

Isou1B /* blue intensity of the light source */
kdR /* shading material red diffuse constant */
kdG /* shading material green diffuse constant */
kdB /* shading material blue diffuse constant */
ks /* shading constant for specular reflection */
kn /* shading constant-exponent for cos */
ux1,uy1,uz1 /* light source 1 */
vx,vy,vz /* viewpoint */
Variables:
x, y, z /* coordinates of the vertex */
a, b, c /* components of the unit normal */
wx,wy,wz /* vector from vertex to light source */
normw /* length of (wx,wy,wz) */
rx,ry,rz /* reflection direction */
normr /* length of (rx,ry,rz) */
tx,ty,tz /* vector from vertex to viewpoint */
IntensR /* R-component of intensity */
IntensG /* G-component of intensity */
IntensB /* B-component of intensity */
normt /* length of (tx,ty,tz) */
innproc /* inner product of the vector from light source to the vertex and unit normal */
shadcos /* cos term for specular reflection */
light /* temporary value */
temp /* temporary value */

lfinite2: /* code entered when "Begin Shaded Polygon (Single Light Source)" command is encountered */
1. read in data
2. if it is a GESP (end shaded polygon) command, output the command, and exit;
3. read in x; /* vertex coord */


4. output x;
5. read in y;
6. output y;
7. read in z;
8. output z;
/* The following two steps are used in specular reflection calculation for viewpoint at a finite distance */
9. vector from the vertex to the viewpoint
   a. tx <-- vx - x;
   b. ty <-- vy - y;
   c. tz <-- vz - z;
10. distance between viewpoint and the vertex
    a. normt <-- tx*tx + ty*ty + tz*tz
    b. normt <-- sqrt(normt)
/* read in the components of reflection direction */
11. read in rx
12. read in ry
13. read in rz
14. norm of reflection direction
    normr <-- rx*rx + ry*ry + rz*rz
    normr <-- sqrt(normr)
15. calculate the cos of the reflection angle
    temp <-- tx*rx + ty*ry + tz*rz
    a. if temp < 0, then shadcos <-- 0
    b. else
       1) temp <-- temp/(normr*normt)
       2) shadcos <-- temp**kn
       3) shadcos <-- ks*shadcos
16. read in IntensR
17. calculate the specular reflection
    IntensR <-- IntensR + Isou1R*shadcos
18. output IntensR
19. read in IntensG
20. calculate the specular reflection
    IntensG <-- IntensG + Isou1G*shadcos
21. output IntensG
22. read in IntensB
23. calculate the specular reflection
    IntensB <-- IntensB + Isou1B*shadcos
24. output IntensB
25. goto lfinite2

SINGLE LIGHT SOURCE - VIEWPOINT AT INFINITY
The simpler case where the viewpoint is at infinity (parallel projection) differs from the above described case where the viewpoint is at a finite distance only in the treatment of the reflection vector during the calculation of the light due to specular reflection. In the case where the viewpoint is at infinity, only the z-component is used, since the viewpoint direction is given by (0,0,1) in the parallel projection case.
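The two treatments of the reflection vector can be sketched as follows (a hypothetical helper, not part of the patent; the arithmetic follows the shadcos steps of the pseudocode, with ks and kn as the specular constants):

```python
import math

def specular_term(rx, ry, rz, tx=None, ty=None, tz=None, ks=1.0, kn=10):
    """ks * cos**kn between the reflection direction (rx,ry,rz) and the
    viewpoint direction.  For a finite viewpoint, pass (tx,ty,tz), the
    vector from the vertex to the viewpoint.  For a viewpoint at infinity
    (parallel projection) omit it: the viewpoint direction is (0,0,1),
    so only the z-component rz of the reflection vector enters the
    inner product."""
    normr = math.sqrt(rx*rx + ry*ry + rz*rz)
    if tx is None:                        # viewpoint at infinity
        temp, normt = rz, 1.0             # (rx,ry,rz) . (0,0,1) = rz
    else:                                 # viewpoint at a finite distance
        temp = tx*rx + ty*ry + tz*rz
        normt = math.sqrt(tx*tx + ty*ty + tz*tz)
    if temp < 0:
        return 0.0
    return ks * (temp / (normr * normt)) ** kn
```

A reflection vector pointing straight at the viewer yields the full ks contribution; one pointing away from the viewer contributes nothing.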

MULTIPLE LIGHT SOURCES -- VIEWPOINT AT A FINITE DISTANCE
In this example, the maximum number of light sources is assumed to be 10. The function of calculating the intensity for each vertex is divided among two floating point processors. The first floating point processor processes the color intensity due to light sources #1, #3, #5, etc., and the second floating point processor processes the color intensity due to light sources #2, #4, #6, etc.
The pseudocode for the first floating point processor, for this case, follows:
Procedure Intensity Multiple 12 (viewpoint at a finite distance) /* first Floating Point Processor */


Inputs:
x, y, z /* coordinates of the vertex */
a, b, c /* components of the unit normal */

Outputs:
x, y, z /* coordinates of the vertex */
a, b, c /* components of the unit normal */
IntensR /* r-component of intensity */
IntensG /* g-component of intensity */
IntensB /* b-component of intensity */

Constants:
IaR /* red intensity of the Ambient Light */
IaG /* green intensity of the Ambient Light */
IaB /* blue intensity of the Ambient Light */
Isou1R /* red intensity of light source #1 */
Isou3R /* red intensity of light source #3 */
Isou5R /* red intensity of light source #5 */
Isou7R /* red intensity of light source #7 */
Isou9R /* red intensity of light source #9 */
Isou1G /* green intensity of light source #1 */
Isou3G /* green intensity of light source #3 */
Isou5G /* green intensity of light source #5 */
Isou7G /* green intensity of light source #7 */
Isou9G /* green intensity of light source #9 */
Isou1B /* blue intensity of light source #1 */
Isou3B /* blue intensity of light source #3 */
Isou5B /* blue intensity of light source #5 */
Isou7B /* blue intensity of light source #7 */
Isou9B /* blue intensity of light source #9 */
kdR /* shading material red diffuse constant */
kdG /* shading material green diffuse constant */
kdB /* shading material blue diffuse constant */
kaR /* shading material red ambient constant */
kaG /* shading material green ambient constant */


kaB /* shading material blue ambient constant */
ks /* shading constant for specular reflection */
kn /* shading constant-exponent for cos */
ux1,uy1,uz1 /* light source 1 */
ux3,uy3,uz3 /* light source 3 */
ux5,uy5,uz5 /* light source 5 */
ux7,uy7,uz7 /* light source 7 */
ux9,uy9,uz9 /* light source 9 */
vx,vy,vz /* viewpoint */
Variables:
wx,wy,wz /* vector from vertex to light source */
normw /* length of (wx,wy,wz) */
rx,ry,rz /* reflection direction */
normr /* length of (rx,ry,rz) */
tx,ty,tz /* vector from vertex to viewpoint */
IntensR /* R-component of intensity */
IntensG /* G-component of intensity */
IntensB /* B-component of intensity */
normt /* length of (tx,ty,tz) */
innproc /* inner product of the vector from light source to the vertex and unit normal */
shadcos /* cos term for specular reflection */
Lnum /* number of light sources */
light /* temporary value */
temp /* temporary value */

mfinite1: /* code entered when "Begin Shaded Polygon (Multiple Light Sources)" command is encountered */
a. read in data
b. if it is a GESP (end shaded polygon) command, output the command, and exit;
/* read in x,y,z coordinates of the vertex, and calculate the vector from the vertex to the viewpoint, which is used in specular reflection calculation for viewpoint at a finite distance */
c. read in x; and tx <-- vx - x;
d. output x to second Floating Point Processor
e. read in y; and ty <-- vy - y;
f. output y to second Floating Point Processor
g. read in z; and tz <-- vz - z;
h. output z to second Floating Point Processor
i. read in a; /* normal at vertex */
j. output a to second Floating Point Processor
k. read in b;
l. output b to second Floating Point Processor
m. read in c;
n. output c to second Floating Point Processor
/* The following steps are used in specular reflection calculation for viewpoint at a finite distance */
1. distance between viewpoint and the vertex
   a. normt <-- tx*tx + ty*ty + tz*tz
   b. normt <-- sqrt(normt)
2. Ambient light R-G-B components
   a. IntensR <-- kaR*IaR
   b. IntensG <-- kaG*IaG
   c. IntensB <-- kaB*IaB
3. For j = 1 to Lnum
   a. diffuse term
      1) vector from the vertex to the light source:
         wx <-- uxj - x;
         wy <-- uyj - y;
         wz <-- uzj - z;
      2) The inner product of the vector from vertex to light source and the unit normal
         innproc <-- wx*a + wy*b + wz*c
      3) distance between the light source and the vertex
         a) normw <-- wx*wx + wy*wy + wz*wz
         b) normw <-- sqrt(normw)
      4) constant used in the calculation for intensity
         a) if innproc <= 0 then light <-- 0;
         b) else light <-- innproc/normw
      5) calculate Intensity (diffuse reflection)
         IntensR <-- IntensR + IsoujR*kdR*light
      6) calculate Intensity (diffuse reflection)
         IntensG <-- IntensG + IsoujG*kdG*light
      7) calculate Intensity (diffuse reflection)
         IntensB <-- IntensB + IsoujB*kdB*light
   b. calculate Intensity (specular reflection)
      1) reflection direction
         temp <-- 2*innproc
         rx <-- temp*a - wx
         ry <-- temp*b - wy
         rz <-- temp*c - wz
      2) norm of reflection direction
         normr <-- rx*rx + ry*ry + rz*rz
         normr <-- sqrt(normr)
      3) calculate the cos of the reflection angle
         temp <-- tx*rx + ty*ry + tz*rz
         a) if temp < 0, then shadcos <-- 0
         b) else
            i. temp <-- temp/(normr*normt)
            ii. shadcos <-- temp**kn
            iii. shadcos <-- ks*shadcos
      4) calculate the specular reflection
         IntensR <-- IntensR + IsoujR*shadcos
      5) calculate the specular reflection
         IntensG <-- IntensG + IsoujG*shadcos
      6) calculate the specular reflection
         IntensB <-- IntensB + IsoujB*shadcos
4. output IntensR to second Floating Point Processor
5. output IntensG to second Floating Point Processor
6. output IntensB to second Floating Point Processor
7. goto mfinite1

The pseudocode for the second floating point processor in this case (multiple light sources -- viewpoint at a finite distance) is identical to that listed above for the first floating point processor, with the following differences:
1. The reflection direction (direction to the viewpoint) calculated by the first floating point processor is passed as an input to the second floating point processor, so that the second floating point processor does not need to calculate it;
2. The light contributions due to the even-numbered light sources (#2, #4, #6, etc.) are computed rather than those due to the odd-numbered light sources; and
3. The contributions due to the even-numbered light sources are added to the contributions of the odd-numbered light sources (which were passed from the first floating point processor to the second as inputs) to obtain the total light intensity at each vertex.
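The division of labor described above can be sketched as follows (hypothetical names, not from the patent; `contribution(j)` stands in for the full diffuse-plus-specular term of light source j):

```python
def total_intensity(num_lights, contribution, ambient):
    """Sum the per-light contributions the way the two pipelined
    floating point processors split them: the first processor computes
    ambient light plus the odd-numbered sources (#1, #3, #5, ...); the
    second adds the even-numbered sources (#2, #4, #6, ...) to the
    partial sum passed along the pipeline from the first."""
    # First floating point processor: ambient + odd-numbered sources.
    partial = ambient + sum(contribution(j)
                            for j in range(1, num_lights + 1) if j % 2 == 1)
    # Second floating point processor: adds even-numbered sources.
    return partial + sum(contribution(j)
                         for j in range(1, num_lights + 1) if j % 2 == 0)
```

With an even number of sources each processor loops over the same number of lights, which is the workload balance the partition is designed to achieve.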
MULTIPLE LIGHT SOURCES -- VIEWPOINT AT INFINITY
Once again, the case where the viewpoint is at infinity (parallel projection) differs from the case where the viewpoint is at a finite distance only in the treatment of the reflection vector during the calculation of the light due to specular reflection. In the case where the viewpoint is at infinity, only the z-component is used, since the viewpoint direction is given by (0, 0, 1) in the parallel projection case.
PROCEDURE TO PERFORM PROJECTION AND MAPPING
TRANSFORMATIONS -- THE THIRD PROCESSOR
This stage of the pipeline performs the following two steps on each polygon vertex in the order listed:
1. An input vertex is transformed from the 3D viewing space to another 3D space according to a "projection" transformation that has the following characteristics:
   a. The x and y coordinates produced by the transformation are the projection of the vertex onto the viewing plane;
   b. The transformation preserves planarity (i.e., applying the transformation to vertices that are coplanar in the 3D viewing space yields vertices that are coplanar in the output 3D space).
2. The transformed vertex is then mapped to the viewport by means of a "mapping" transformation.
In the case of perspective projection (viewpoint at a finite distance from the object), the projection transformation is accomplished by means of the following formulae:

Xproj = x / (1 + z/d)

Yproj = y / (1 + z/d)

Zproj = -1 / (1 + z/d)

where (Xproj, Yproj) is the projection of (x,y,z) from the 3D viewing space onto the viewing plane, and d is the distance from the viewpoint to the viewing plane. The projection function for z is chosen to ensure that the projection of a plane in the 3D viewing space is still a plane, and that Zproj increases as z increases. The depth cueing reference planes z = Pf and z = Pb described in the next section are defined in the output space of the projection transformation.
In the case of parallel projection (viewpoint at infinity), the viewing space coordinate values themselves are used as the projection values:

Xproj = x
Yproj = y
Zproj = z.
Whether parallel or perspective, the projection transformation is followed by the mapping transformation, which is accomplished by means of the following formulae:

Xscreen <-- XVmin + Rx*(Xproj - XCmin)
Yscreen <-- YVmin + Ry*(Yproj - YCmin)

where XVmin is the left boundary of the viewport (the area of the screen which will be used to display the image), XCmin is the left boundary of the clipping volume (the region of viewing space which will be mapped to the viewport), YVmin is the lower boundary of the viewport, YCmin is the lower boundary of the clipping volume, and Rx and Ry are the X and Y ratios of the size of the viewport to the size of the clipping volume.
Pseudocode for the projection and mapping procedure follows:
Procedure Projection and Mapping

Input:
x /* Vertex viewing x coordinate */
y /* Vertex viewing y coordinate */
z /* Vertex viewing z coordinate */
IntensR /* Vertex color (red) */
IntensG /* Vertex color (green) */
IntensB /* Vertex color (blue) */

Output:
Xs /* Vertex screen x coordinate */
Ys /* Vertex screen y coordinate */
Zproj /* Vertex projected z coordinate */
IntensR /* Vertex color (red); unchanged */
IntensG /* Vertex color (green); unchanged */
IntensB /* Vertex color (blue); unchanged */

Constants:
Rx /* Ratio of x size of viewport to clipping window */
Ry /* Ratio of y size of viewport to clipping window */
XVmin /* left edge of viewport */
XCmin /* left edge of clipping window */


YVmin /* lower edge of viewport */
YCmin /* lower edge of clipping window */

Variables:
Aux /* temporary value */

proj_and_map:
1. read in next piece of data
2. if it is a GESP (end shaded polygon) command, output the command, and exit;
3. Read in x and store in Xproj;
   /* compute Xproj for parallel case */
4. Read in y and store in Yproj;
   /* compute Yproj for parallel case */
5. Read in z and store in Zproj;
   /* compute Zproj for parallel case */
6. if viewpoint is at infinity (parallel projection mode), goto mapping;
7. Aux <-- Zproj/d
   /* compute Zproj for perspect. case */
8. Aux <-- Aux + 1
9. Zproj <-- (-1)/Aux
10. Xproj <-- x*Zproj
    /* compute Xproj for perspect. case */
11. Yproj <-- y*Zproj
    /* compute Yproj for perspect. case */

mapping:
12. Aux <-- Xproj - XCmin
    /* Computation of Xs, the screen X value */
13. Aux <-- Aux * Rx
14. Xs <-- Aux + XVmin
15. Output Xs.
16. Aux <-- Yproj - YCmin
    /* Computation of Ys, the screen Y value */
17. Aux <-- Aux * Ry
18. Ys <-- Aux + YVmin
19. Output Ys.
20. Output Zproj.
21. Read in IntensR.
22. Output IntensR.
23. Read in IntensG.
24. Output IntensG.
25. Read in IntensB.
26. Output IntensB.
27. goto proj_and_map.
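The same procedure can be condensed into a few lines (a hypothetical function, not the patent's code; it follows the pseudocode step for step, with `d=None` standing in for the parallel-projection test of step 6):

```python
def project_and_map(x, y, z, d=None,
                    Rx=1.0, Ry=1.0, XVmin=0.0, YVmin=0.0,
                    XCmin=0.0, YCmin=0.0):
    """Project a vertex and map it to the viewport.  d is the distance
    from the viewpoint to the viewing plane (perspective projection);
    d=None means the viewpoint is at infinity (parallel projection),
    so the viewing-space coordinates are used directly."""
    if d is None:                        # parallel case (steps 3-6)
        Xproj, Yproj, Zproj = x, y, z
    else:                                # perspective case (steps 7-11)
        Zproj = -1.0 / (1.0 + z / d)
        Xproj = x * Zproj
        Yproj = y * Zproj
    Xs = XVmin + Rx * (Xproj - XCmin)    # mapping (steps 12-15)
    Ys = YVmin + Ry * (Yproj - YCmin)    # mapping (steps 16-19)
    return Xs, Ys, Zproj
```

With the default identity viewport, a parallel-mode vertex passes through unchanged, while perspective mode divides x and y by (1 + z/d) via the reused Zproj value, exactly as steps 10 and 11 do.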

PROCEDURE TO PERFORM DEPTH CUEING, COLOR MAPPING
AND COLOR CLIPPING -- THE FOURTH PROCESSOR
The following conceptual steps are implemented in this processor:
. Depth cueing (the changing of the light intensity at a point as a function of the point's distance from the viewer),
. Color mapping (the process by which the light intensities calculated as a result of the lighting and depth cueing processes are mapped to the dynamic color capabilities of the subsequent display system), and
. Color clipping (the process by which intensities that exceed the maximum intensity supported by the dynamic color capabilities of the display system are replaced with the maximum intensity value).

DEPTH CUEING
In this process, the RGB intensities calculated by the lighting model at a vertex are blended with a specified color value as a visual cue indicating to the workstation user which portions of the image on the screen are furthest from the viewpoint. This is done by means of a mixing function which varies the output color intensity as a function of the z coordinate of the vertex:

Ci = w*Intensi + (1 - w)*Cdi

where:
. Intensi is the component of the input color at the vertex;
. Cdi is the component of the color intensity with which blending is done; and
. Ci is the component of the output color intensity at the vertex;
for i = R, G, and B, and
. Pf is the z value of the front reference plane;
. Pb is the z value of the back reference plane;
. Sf is the front scale factor;
. Sb is the back scale factor; and
. w, the mixing function variable, is defined as a function of Zproj, the z value created by the projection and mapping stage, according to the following formulae:

w = Sb + (Zproj - Pb)*(Sf - Sb)/(Pf - Pb) ; Pf >= Zproj >= Pb
w = Sf ; Zproj > Pf
w = Sb ; Zproj < Pb

COLOR MAPPING
The color intensities calculated by the lighting and depth cueing processes are independent of the dynamic range of color intensities that can be displayed by the display hardware subsequent in the system. In order to couple the lighting model calculation stages to the display hardware following it in the graphics workstation, this step applies a linear transformation to each individual color component to match the dynamic color range available in the display hardware. The new color values, C'i (i = R, G, and B), are obtained by means of the formulae:

C'i = ai*Ci + bi (i = R, G, B)

where the ai's and bi's are the color mapping parameters appropriate for the display hardware.

COLOR CLIPPING
Even after the mapping step, the intensities calculated by the lighting stage may range higher than the maximum intensity supported by the display hardware. The purpose of the color clipping step is to make sure the color intensities passed to the display hardware do not exceed the maximum allowed value. This removes saturation problems caused by large color values created during the lighting calculation stages. The new color values, C"i (i = R, G, and B), are given by the formulae:

C"i = MIN(Bhi, C'i) (i = R, G, B)

where Bhi is the maximum allowable value for color component i, and C'i is the output color component from the color mapping step. In addition, this step converts the floating point representation of the output color intensity to the integer representation needed by the display hardware.

IMPLEMENTATION
Since the formulae used in the depth cueing and color mapping steps are linear equations, they can be combined into one equation to reduce the number of calculations that must be performed in this stage of the pipeline. To this end, the following parameters of the combined equation are calculated in advance, that is, at the time that the control parameters for these steps are specified:

Qfi = ai*Sf
Qbi = ai*Sb
Ai = ai*F
Bi = ai*(Sb - F*Pb)
hi = ai*Cdi + bi

for i = R, G, and B, where

F = (Sf - Sb)/(Pf - Pb).

Having calculated the preceding parameters in advance, the following sequence of operations is performed:
1. Compute qi as a linear function of z, the depth cueing parameters, and the display hardware parameters:

qi = Ai*Zproj + Bi ; Pf >= Zproj >= Pb (i = R, G, B)
qi = Qfi ; Zproj > Pf
qi = Qbi ; Zproj < Pb

Ai, Bi, Qfi, and Qbi being constants computed as mentioned above.
2. Compute the mapped depth-cued color intensity C'i as a function of the input color Ci and previously computed parameters qi and hi:

C'i = qi*(Ci - Cdi) + hi (i = R, G, B)

3. Clip the mapped depth-cued color intensity C'i and convert it to integer representation for the subsequent display hardware's use:

C"i = TRUNC(MIN(Bhi, C'i)) (i = R, G, B)

where the function TRUNC (for TRUNCation) converts the floating point value to integer representation.
Pseudocode suitable for implementing the depth cueing, color mapping and color clipping procedures follows:


Procedure Depth Color

Input:
Xs /* Vertex screen coordinate */
Ys /* Vertex screen coordinate */
Zproj /* Vertex projected z coordinate */
IntensR /* Vertex color (red) */
IntensG /* Vertex color (green) */
IntensB /* Vertex color (blue) */

Output:
Xs /* Vertex screen coordinate */
Ys /* Vertex screen coordinate */
Zproj /* Vertex projected z coordinate */
C"R /* Vertex color (red) */
C"G /* Vertex color (green) */
C"B /* Vertex color (blue) */

Constants:
QfR /* constant computed when control parameters loaded */
QfG /* constant computed when control parameters loaded */
QfB /* constant computed when control parameters loaded */
QbR /* constant computed when control parameters loaded */
QbG /* constant computed when control parameters loaded */
QbB /* constant computed when control parameters loaded */
AR /* constant computed when control parameters loaded */
AG /* constant computed when control parameters loaded */
AB /* constant computed when control parameters loaded */
BR /* constant computed when control parameters loaded */
BG /* constant computed when control parameters loaded */
BB /* constant computed when control parameters loaded */
hR /* constant computed when control parameters loaded */
hG /* constant computed when control parameters loaded */
hB /* constant computed when control parameters loaded */
BhR /* maximum allowable red intensity */
BhG /* maximum allowable green intensity */
BhB /* maximum allowable blue intensity */
Pf /* Z-value of front reference plane */
Pb /* Z-value of back reference plane */
Variables:
Aux /* temporary value */

Program flow:
1. For each vertex DO
2. Read in Xs.
3. Output Xs.
4. Read in Ys.
5. Output Ys.
6. Read in Zproj.
7. Output Zproj.
For i = R, G, B DO
8. Read in Ci
9. Aux = qi /* Computation of qi as a function of Zproj */
10. Aux = Aux*(Ci - Cdi) + hi /* Computation of C'i */
11. Aux = MIN(Bhi, Aux)
12. Aux = Truncate Aux
13. Output Aux
14. Enddo
15. Enddo

From the preceding detailed description it will be apparent that a new lighting model processing system has been developed which exhibits high throughput, affords improved interactivity and higher image quality, and fulfills all of the other objects set forth hereinabove. Although various specific embodiments have been depicted and described, it will be evident to those skilled in this art that numerous modifications, substitutions, additions and other changes may be made without departing from the principles of the invention, the scope of which is defined by the claims appended hereto.

Claims (20)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. Apparatus for processing lighting model information in order to display a shaded image of an object upon a viewing screen of a computer graphics display system, wherein an object is represented in viewing space by a mesh of polygons, each polygon being defined by a set of vertices with the location of each of said vertices in viewing space being known, the apparatus comprising:
a pipeline arrangement of multiple, identical floating point processors, which arrangement receives data representing coordinates in viewing space of vertices of a polygon and a normal at each of the vertices of the polygon, and calculates therefrom coordinates on the viewing screen of the vertices, and screen color intensity values associated with each of said vertices based upon a specified lighting model.
2. The apparatus of claim 1 wherein said pipeline arrangement comprises:
at least one processor for calculating for each of said vertices a first set of color intensity values due to ambient lighting and diffuse and specular reflection effects;
a second processor for receiving and processing said first set of color intensity values to provide for depth cueing, color mapping and color clipping; and a third processor for performing a projection transformation and for mapping the coordinates of each of the vertices from viewing space to screen space.
3. The apparatus of claim 1 further comprising partitioning means for dynamically allocating computational tasks among various of the processors in accordance with different lighting models, in a fashion designed to maintain computational workload balance between said various processors.
4. Apparatus for processing lighting model information in order to display a shaded image of an object on a viewing screen of a computer graphics display system, comprising:
multiple floating point processing stages for calculating screen vertex light intensity values due to ambient lighting of an object and surface reflection from said object, said stages being connected and operated in a pipeline arrangement.
5. The apparatus of claim 4 wherein the processing stages are all comprised of processing elements of identical hardware configuration.
6. The apparatus of claim 5 wherein each of said processing elements comprises a single identical VLSI chip.
7. The apparatus of claim 5 wherein each processing element comprises:
an input FIFO, an output FIFO, floating point arithmetic processing means connected between said input FIFO and output FIFO, data storage means for interfacing with said arithmetic processing means, control logic and control program storage means, and sequencer means for controlling the operation of said arithmetic processing means and data storage means in accordance with said control logic and control program.
8. The apparatus of claim 7 wherein:
said data storage means comprises a bank of registers for storing data; and said arithmetic processing means comprises a floating point multiplier, means for calculating inverses and square roots, and a floating point adder, the adder being connected to said multiplier in such a way as to serve as an accumulator for the multiplier.
9. The apparatus of claim 5 wherein at least one of said processing stages comprises a plurality of said processing elements connected in parallel.
10. The apparatus of claim 5 wherein an object is represented in viewing space by a mesh of polygons, each polygon being defined by a set of vertices with the location of each of said vertices in viewing space being known, and wherein the pipeline arrangement of multiple processing stages receives data representing x, y and z coordinates in viewing space of vertices of a polygon and x, y and z components of a normal at each of the vertices of the polygon, and calculates therefrom x and y coordinates on the viewing screen of the vertices and screen color intensity values associated with each of said vertices based upon a specified lighting model.
11. The apparatus of claim 10 wherein said screen color intensity values comprise a red component intensity value, a green component intensity value and a blue component intensity value for each of said vertices.
12. The apparatus of claim 5 wherein said pipeline arrangement of multiple processing stages comprises:
first and second processing stages for jointly calculating vertex intensity values due to ambient light and diffuse and specular reflection;
and a third processing stage for receiving the intensity values jointly calculated by said first and second processing stages and for further processing said intensity values to provide for depth cueing.
13. The apparatus of claim 12 further comprising a fourth processing stage for mapping vertices onto a viewing screen.
14. The apparatus of claim 13 wherein said third processing stage further comprises means for performing color mapping and color clipping.
15. The apparatus of claim 13 wherein said first and second processing stages are selectively programmable to accommodate a single light source model and a multiple light source model, with the intensity value calculations being partitioned so as to maintain a computational work load balance between said first and second processing stages.
16. The apparatus of claim 15 wherein the first processing stage calculates ambient lighting and diffuse reflection effects, and the second processing stage calculates specular reflection effects when said first and second processing stages are programmed to accommodate a single light source model.
17. The apparatus of claim 16 wherein said first processing stage calculates ambient lighting and diffuse and specular reflection effects for odd light sources, and the second processing stage calculates ambient lighting, and diffuse and specular reflection effects for even light sources when said first and second processing stages are programmed to accommodate a multiple light source model.
18. The apparatus of claim 10 in combination with:
shading processing means for receiving and processing said screen color intensity values to calculate color intensities of pixels interior to visible polygons, video pixel memory means connected to said shading processing means for storing color intensity information, and display monitor means connected to said video pixel memory means for displaying a shaded image of an object.
19. A method of performing lighting model calculations in a computer graphics display system to derive screen vertex light intensity values, the method comprising the steps of:
providing multiple identical floating point processors capable of performing such lighting model calculations;
connecting and operating said multiple processors in a pipeline arrangement; and partitioning the lighting model calculations among said multiple processors so as to substantially balance computational workload between said processors.
20. The method of claim 19 wherein said partitioning step comprises dynamically partitioning the lighting model calculations in a fashion designed to maintain substantial computational workload balance between said processors irrespective of the number of light sources being modeled.
CA000581529A 1987-10-30 1988-10-27 Pipelined lighting model processing system for a graphics workstation's shading function Expired - Lifetime CA1304824C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/115,467 US4866637A (en) 1987-10-30 1987-10-30 Pipelined lighting model processing system for a graphics workstation's shading function
US07/115,467 1987-10-30

Publications (1)

Publication Number Publication Date
CA1304824C true CA1304824C (en) 1992-07-07

Family

ID=22361599

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000581529A Expired - Lifetime CA1304824C (en) 1987-10-30 1988-10-27 Pipelined lighting model processing system for a graphics workstation's shading function

Country Status (5)

Country Link
US (1) US4866637A (en)
EP (1) EP0314341B1 (en)
JP (1) JPH0731741B2 (en)
CA (1) CA1304824C (en)
DE (1) DE3853336T2 (en)

Families Citing this family (132)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9405914D0 (en) * 1994-03-24 1994-05-11 Discovision Ass Video decompression
EP0576749B1 (en) * 1992-06-30 1999-06-02 Discovision Associates Data pipeline system
US5222203A (en) * 1989-01-20 1993-06-22 Daikin Industries, Ltd. Method and apparatus for displaying translucent surface
JP2774627B2 (en) * 1989-12-28 1998-07-09 株式会社日立製作所 Image display method and apparatus
SE464265B (en) * 1990-01-10 1991-03-25 Stefan Blixt Graphics Processor
DE69129995T2 (en) * 1990-01-23 1998-12-24 Hewlett Packard Co Decentralized processing device and method for use in global reproduction
JPH04122544A (en) * 1990-05-14 1992-04-23 Mitsubishi Electric Corp Cutting simulation method for lathe
JPH0476680A (en) * 1990-05-14 1992-03-11 Mitsubishi Electric Corp Graphic display method for rotary body
EP0459761A3 (en) * 1990-05-31 1993-07-14 Hewlett-Packard Company Three dimensional computer graphics employing ray tracking to compute form factors in radiosity
US5253339A (en) * 1990-07-26 1993-10-12 Sun Microsystems, Inc. Method and apparatus for adaptive Phong shading
GB9026232D0 (en) * 1990-12-03 1991-01-16 Ige Medical Systems Image processing system
US5268996A (en) * 1990-12-20 1993-12-07 General Electric Company Computer image generation method for determination of total pixel illumination due to plural light sources
TW225595B (en) * 1991-09-03 1994-06-21 Gen Electric
US5706415A (en) * 1991-12-20 1998-01-06 Apple Computer, Inc. Method and apparatus for distributed interpolation of pixel shading parameter values
US5388841A (en) 1992-01-30 1995-02-14 A/N Inc. External memory system having programmable graphics processor for use in a video game system or the like
WO1993020529A1 (en) * 1992-03-31 1993-10-14 Seiko Epson Corporation System and method for generating 3d color images with simulated light sources
US5768561A (en) * 1992-06-30 1998-06-16 Discovision Associates Tokens-based adaptive video processing arrangement
US6034674A (en) * 1992-06-30 2000-03-07 Discovision Associates Buffer manager
US6330665B1 (en) 1992-06-30 2001-12-11 Discovision Associates Video parser
US6067417A (en) * 1992-06-30 2000-05-23 Discovision Associates Picture start token
US6435737B1 (en) * 1992-06-30 2002-08-20 Discovision Associates Data pipeline system and data encoding method
US6079009A (en) * 1992-06-30 2000-06-20 Discovision Associates Coding standard token in a system compromising a plurality of pipeline stages
US5809270A (en) * 1992-06-30 1998-09-15 Discovision Associates Inverse quantizer
US6047112A (en) * 1992-06-30 2000-04-04 Discovision Associates Technique for initiating processing of a data stream of encoded video information
US5603012A (en) * 1992-06-30 1997-02-11 Discovision Associates Start code detector
US7095783B1 (en) 1992-06-30 2006-08-22 Discovision Associates Multistandard video decoder and decompression system for processing encoded bit streams including start codes and methods relating thereto
US6417859B1 (en) 1992-06-30 2002-07-09 Discovision Associates Method and apparatus for displaying video data
US6112017A (en) * 1992-06-30 2000-08-29 Discovision Associates Pipeline processing machine having a plurality of reconfigurable processing stages interconnected by a two-wire interface bus
EP0578950A3 (en) * 1992-07-15 1995-11-22 Ibm Method and apparatus for converting floating-point pixel values to byte pixel values by table lookup
US5357599A (en) * 1992-07-30 1994-10-18 International Business Machines Corporation Method and apparatus for rendering polygons
US5821940A (en) * 1992-08-03 1998-10-13 Ball Corporation Computer graphics vertex index cache system for polygons
US5315701A (en) * 1992-08-07 1994-05-24 International Business Machines Corporation Method and system for processing graphics data streams utilizing scalable processing nodes
GB2271257A (en) * 1992-10-02 1994-04-06 Canon Res Ct Europe Ltd Processing image data
TW241196B (en) * 1993-01-15 1995-02-21 Du Pont
US5606650A (en) * 1993-04-22 1997-02-25 Apple Computer, Inc. Method and apparatus for storage and retrieval of a texture map in a graphics display system
US5402533A (en) * 1993-04-22 1995-03-28 Apple Computer, Inc. Method and apparatus for approximating a signed value between two endpoint values in a three-dimensional image rendering device
IL109462A0 (en) * 1993-04-30 1994-07-31 Scitex Corp Ltd Method for generating artificial shadow
GB2293079B (en) * 1993-05-10 1997-07-02 Apple Computer Computer graphics system having high performance multiple layer z-buffer
US5583974A (en) * 1993-05-10 1996-12-10 Apple Computer, Inc. Computer graphics system having high performance multiple layer Z-buffer
US5974189A (en) * 1993-05-24 1999-10-26 Eastman Kodak Company Method and apparatus for modifying electronic image data
DE69418646T2 (en) * 1993-06-04 2000-06-29 Sun Microsystems Inc Floating point processor for a high-performance three-dimensional graphics accelerator
US5805914A (en) * 1993-06-24 1998-09-08 Discovision Associates Data pipeline system and data encoding method
AU6665194A (en) * 1993-08-24 1995-03-21 Taligent, Inc. Object oriented shading
US5613052A (en) * 1993-09-02 1997-03-18 International Business Machines Corporation Method and apparatus for clipping and determining color factors for polygons
US5742292A (en) * 1993-10-29 1998-04-21 Kabushiki Kaisha Toshiba System and method for realistically displaying images indicating the effects of lighting on an object in three dimensional space
CA2145379C (en) * 1994-03-24 1999-06-08 William P. Robbins Method and apparatus for addressing memory
CA2145361C (en) * 1994-03-24 1999-09-07 Martin William Sotheran Buffer manager
CA2145365C (en) * 1994-03-24 1999-04-27 Anthony M. Jones Method for accessing banks of dram
US5808627A (en) * 1994-04-22 1998-09-15 Apple Computer, Inc. Method and apparatus for increasing the speed of rendering of objects in a display system
US6217234B1 (en) * 1994-07-29 2001-04-17 Discovision Associates Apparatus and method for processing data with an arithmetic unit
GB9417138D0 (en) 1994-08-23 1994-10-12 Discovision Ass Data rate conversion
US5764228A (en) * 1995-03-24 1998-06-09 3Dlabs Inc., Ltd. Graphics pre-processing and rendering system
US5835096A (en) * 1995-03-24 1998-11-10 3D Labs Rendering system using 3D texture-processing hardware for accelerated 2D rendering
US5798770A (en) * 1995-03-24 1998-08-25 3Dlabs Inc. Ltd. Graphics rendering system with reconfigurable pipeline sequence
US5764243A (en) * 1995-03-24 1998-06-09 3Dlabs Inc. Ltd. Rendering architecture with selectable processing of multi-pixel spans
US6025853A (en) * 1995-03-24 2000-02-15 3Dlabs Inc. Ltd. Integrated graphics subsystem with message-passing architecture
US5717715A (en) * 1995-06-07 1998-02-10 Discovision Associates Signal processing apparatus and method
JPH096424A (en) * 1995-06-19 1997-01-10 Mitsubishi Electric Corp Cad and cam device and working simulation method
US6643765B1 (en) 1995-08-16 2003-11-04 Microunity Systems Engineering, Inc. Programmable processor with group floating point operations
US6111584A (en) * 1995-12-18 2000-08-29 3Dlabs Inc. Ltd. Rendering system with mini-patch retrieval from local texture storage
US5739819A (en) * 1996-02-05 1998-04-14 Scitex Corporation Ltd. Method and apparatus for generating an artificial shadow in a two dimensional color image
DE19606357A1 (en) * 1996-02-12 1997-08-14 Gmd Gmbh Image processing method for the representation of reflecting objects and associated device
JP3226153B2 (en) * 1996-03-18 2001-11-05 シャープ株式会社 Multimedia data display device
US5745125A (en) * 1996-07-02 1998-04-28 Sun Microsystems, Inc. Floating point processor for a three-dimensional graphics accelerator which includes floating point, lighting and set-up cores for improved performance
JP3387750B2 (en) * 1996-09-02 2003-03-17 株式会社リコー Shading processing equipment
US5854632A (en) * 1996-10-15 1998-12-29 Real 3D Apparatus and method for simulating specular reflection in a computer graphics/imaging system
US6016149A (en) * 1997-06-30 2000-01-18 Sun Microsystems, Inc. Lighting unit for a three-dimensional graphics accelerator with improved processing of multiple light sources
US6014144A (en) * 1998-02-03 2000-01-11 Sun Microsystems, Inc. Rapid computation of local eye vectors in a fixed point lighting unit
US6650327B1 (en) 1998-06-16 2003-11-18 Silicon Graphics, Inc. Display system having floating point rasterization and floating point framebuffering
US6977649B1 (en) 1998-11-23 2005-12-20 3Dlabs, Inc. Ltd 3D graphics rendering with selective read suspend
US6417858B1 (en) * 1998-12-23 2002-07-09 Microsoft Corporation Processor for geometry transformations and lighting calculations
US6181352B1 (en) 1999-03-22 2001-01-30 Nvidia Corporation Graphics pipeline selectively providing multiple pixels or multiple textures
US6333744B1 (en) * 1999-03-22 2001-12-25 Nvidia Corporation Graphics pipeline including combiner stages
WO2001029768A2 (en) 1999-10-18 2001-04-26 S3 Incorporated Multi-stage fixed cycle pipe-lined lighting equation evaluator
US6411301B1 (en) 1999-10-28 2002-06-25 Nintendo Co., Ltd. Graphics system interface
US6452600B1 (en) 1999-10-28 2002-09-17 Nintendo Co., Ltd. Graphics system interface
US6618048B1 (en) 1999-10-28 2003-09-09 Nintendo Co., Ltd. 3D graphics rendering system for performing Z value clamping in near-Z range to maximize scene resolution of visually important Z components
US6597357B1 (en) * 1999-12-20 2003-07-22 Microsoft Corporation Method and system for efficiently implementing two sided vertex lighting in hardware
US6859862B1 (en) 2000-04-07 2005-02-22 Nintendo Co., Ltd. Method and apparatus for software management of on-chip cache
US6857061B1 (en) 2000-04-07 2005-02-15 Nintendo Co., Ltd. Method and apparatus for obtaining a scalar value directly from a vector register
US6724394B1 (en) 2000-05-31 2004-04-20 Nvidia Corporation Programmable pixel shading architecture
US6664963B1 (en) * 2000-05-31 2003-12-16 Nvidia Corporation System, method and computer program product for programmable shading using pixel shaders
US7119813B1 (en) 2000-06-02 2006-10-10 Nintendo Co., Ltd. Variable bit field encoding
US6788302B1 (en) 2000-08-03 2004-09-07 International Business Machines Corporation Partitioning and load balancing graphical shape data for parallel applications
US6707458B1 (en) 2000-08-23 2004-03-16 Nintendo Co., Ltd. Method and apparatus for texture tiling in a graphics system
US6811489B1 (en) 2000-08-23 2004-11-02 Nintendo Co., Ltd. Controller interface for a graphics system
US7002591B1 (en) 2000-08-23 2006-02-21 Nintendo Co., Ltd. Method and apparatus for interleaved processing of direct and indirect texture coordinates in a graphics system
US7034828B1 (en) 2000-08-23 2006-04-25 Nintendo Co., Ltd. Recirculating shade tree blender for a graphics system
US6867781B1 (en) 2000-08-23 2005-03-15 Nintendo Co., Ltd. Graphics pipeline token synchronization
US6609977B1 (en) 2000-08-23 2003-08-26 Nintendo Co., Ltd. External interfaces for a 3D graphics system
US7196710B1 (en) 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
US7184059B1 (en) 2000-08-23 2007-02-27 Nintendo Co., Ltd. Graphics system with copy out conversions between embedded frame buffer and main memory
US7134960B1 (en) 2000-08-23 2006-11-14 Nintendo Co., Ltd. External interfaces for a 3D graphics system
US6636214B1 (en) 2000-08-23 2003-10-21 Nintendo Co., Ltd. Method and apparatus for dynamically reconfiguring the order of hidden surface processing based on rendering mode
US6980218B1 (en) 2000-08-23 2005-12-27 Nintendo Co., Ltd. Method and apparatus for efficient generation of texture coordinate displacements for implementing emboss-style bump mapping in a graphics rendering system
US6606689B1 (en) 2000-08-23 2003-08-12 Nintendo Co., Ltd. Method and apparatus for pre-caching data in audio memory
US6999100B1 (en) 2000-08-23 2006-02-14 Nintendo Co., Ltd. Method and apparatus for anti-aliasing in a graphics system
US6639595B1 (en) 2000-08-23 2003-10-28 Nintendo Co., Ltd. Achromatic lighting in a graphics system and method
US6664958B1 (en) 2000-08-23 2003-12-16 Nintendo Co., Ltd. Z-texturing
US6700586B1 (en) 2000-08-23 2004-03-02 Nintendo Co., Ltd. Low cost graphics with stitching processing hardware support for skeletal animation
US6825851B1 (en) 2000-08-23 2004-11-30 Nintendo Co., Ltd. Method and apparatus for environment-mapped bump-mapping in a graphics system
US6937245B1 (en) 2000-08-23 2005-08-30 Nintendo Co., Ltd. Graphics system with embedded frame buffer having reconfigurable pixel formats
US6580430B1 (en) 2000-08-23 2003-06-17 Nintendo Co., Ltd. Method and apparatus for providing improved fog effects in a graphics system
US6664962B1 (en) 2000-08-23 2003-12-16 Nintendo Co., Ltd. Shadow mapping in a low cost graphics system
US7538772B1 (en) 2000-08-23 2009-05-26 Nintendo Co., Ltd. Graphics processing system with enhanced memory controller
US6697074B2 (en) * 2000-11-28 2004-02-24 Nintendo Co., Ltd. Graphics system interface
FR2826769B1 (en) * 2001-06-29 2003-09-05 Thales Sa METHOD FOR DISPLAYING MAPPING INFORMATION ON AIRCRAFT SCREEN
US6781594B2 (en) * 2001-08-21 2004-08-24 Sony Computer Entertainment America Inc. Method for computing the intensity of specularly reflected light
US7003588B1 (en) 2001-08-22 2006-02-21 Nintendo Co., Ltd. Peripheral devices for a video game system
US7046245B2 (en) * 2001-10-10 2006-05-16 Sony Computer Entertainment America Inc. System and method for environment mapping
US7451457B2 (en) 2002-04-15 2008-11-11 Microsoft Corporation Facilitating interaction between video renderers and graphics device drivers
US7219352B2 (en) * 2002-04-15 2007-05-15 Microsoft Corporation Methods and apparatuses for facilitating processing of interlaced video images for progressive video displays
US7308139B2 (en) * 2002-07-12 2007-12-11 Chroma Energy, Inc. Method, system, and apparatus for color representation of seismic data and associated measurements
US7006090B2 (en) * 2003-02-07 2006-02-28 Crytek Gmbh Method and computer program product for lighting a computer graphics image and a computer
US7106326B2 (en) * 2003-03-03 2006-09-12 Sun Microsystems, Inc. System and method for computing filtered shadow estimates using reduced bandwidth
JP2005057738A (en) * 2003-07-18 2005-03-03 Canon Inc Signal processing apparatus, signal processing method, and program
US7139002B2 (en) * 2003-08-01 2006-11-21 Microsoft Corporation Bandwidth-efficient processing of video images
US7643675B2 (en) * 2003-08-01 2010-01-05 Microsoft Corporation Strategies for processing image information using a color information data structure
US7158668B2 (en) 2003-08-01 2007-01-02 Microsoft Corporation Image processing using linear light values and other image processing improvements
US8133115B2 (en) 2003-10-22 2012-03-13 Sony Computer Entertainment America Llc System and method for recording and displaying a graphical path in a video game
US20060071933A1 (en) 2004-10-06 2006-04-06 Sony Computer Entertainment Inc. Application binary interface for multi-pass shaders
US7636126B2 (en) 2005-06-22 2009-12-22 Sony Computer Entertainment Inc. Delay matching in audio/video systems
US7965859B2 (en) 2006-05-04 2011-06-21 Sony Computer Entertainment Inc. Lighting control of a user environment via a display device
US7880746B2 (en) 2006-05-04 2011-02-01 Sony Computer Entertainment Inc. Bandwidth management through lighting control of a user environment via a display device
US7940266B2 (en) * 2006-10-13 2011-05-10 International Business Machines Corporation Dynamic reallocation of processing cores for balanced ray tracing graphics workload
EP2080104A4 (en) * 2006-11-10 2010-01-06 Sandbridge Technologies Inc Method and system for parallelization of pipelined computations
US8922565B2 (en) * 2007-11-30 2014-12-30 Qualcomm Incorporated System and method for using a secondary processor in a graphics system
US8081019B2 (en) * 2008-11-21 2011-12-20 Flextronics Ap, Llc Variable PFC and grid-tied bus voltage control
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
US9342817B2 (en) 2011-07-07 2016-05-17 Sony Interactive Entertainment LLC Auto-creating groups for sharing photos
CN105023249B (en) * 2015-06-26 2017-11-17 清华大学深圳研究生院 Bloom image repair method and device based on light field
US11756254B2 (en) * 2020-12-08 2023-09-12 Nvidia Corporation Light importance caching using spatial hashing in real-time ray tracing applications

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5513582A (en) * 1978-07-13 1980-01-30 Sanyo Electric Co Ltd Color television receiver
US4241341A (en) * 1979-03-05 1980-12-23 Thorson Mark R Apparatus for scan conversion
CA1141468A (en) * 1979-06-15 1983-02-15 Martin J.P. Bolton Visual display apparatus
US4646075A (en) * 1983-11-03 1987-02-24 Robert Bosch Corporation System and method for a data processing pipeline
US4658247A (en) * 1984-07-30 1987-04-14 Cornell Research Foundation, Inc. Pipelined, line buffered real-time color graphics display system
JPH0746391B2 (en) * 1984-09-14 1995-05-17 株式会社日立製作所 Graphic seeding device
DE3689271T2 (en) * 1985-02-26 1994-02-24 Sony Corp Image display procedure.
US4737921A (en) * 1985-06-03 1988-04-12 Dynamic Digital Displays, Inc. Three dimensional medical image display system

Also Published As

Publication number Publication date
DE3853336T2 (en) 1995-09-28
DE3853336D1 (en) 1995-04-20
EP0314341A2 (en) 1989-05-03
EP0314341B1 (en) 1995-03-15
EP0314341A3 (en) 1991-07-24
JPH01163884A (en) 1989-06-28
JPH0731741B2 (en) 1995-04-10
US4866637A (en) 1989-09-12

Similar Documents

Publication Publication Date Title
CA1304824C (en) Pipelined lighting model processing system for a graphics workstation's shading function
US5995111A (en) Image processing apparatus and method
US5268995A (en) Method for executing graphics Z-compare and pixel merge instructions in a data processor
US7755626B2 (en) Cone-culled soft shadows
US6664963B1 (en) System, method and computer program product for programmable shading using pixel shaders
US6333747B1 (en) Image synthesizing system with texture mapping
US7061488B2 (en) Lighting and shadowing methods and arrangements for use in computer graphic simulations
JP5054729B2 (en) Lighting and shadow simulation of computer graphics / image generation equipment
US5185856A (en) Arithmetic and logic processing unit for computer graphics system
US6628290B1 (en) Graphics pipeline selectively providing multiple pixels or multiple textures
US6333744B1 (en) Graphics pipeline including combiner stages
US8648856B2 (en) Omnidirectional shadow texture mapping
US6549203B2 (en) Lighting and shadowing methods and arrangements for use in computer graphic simulations
US6525723B1 (en) Graphics system which renders samples into a sample buffer and generates pixels in response to stored samples at different rates
JP3759971B2 (en) How to shade a 3D image
US20080114826A1 (en) Single Precision Vector Dot Product with "Word" Vector Write Mask
US20090150648A1 (en) Vector Permute and Vector Register File Write Mask Instruction Variant State Extension for RISC Length Vector Instructions
US8169439B2 (en) Scalar precision float implementation on the “W” lane of vector unit
US20080114824A1 (en) Single Precision Vector Permute Immediate with "Word" Vector Write Mask
US20080079713A1 (en) Area Optimized Full Vector Width Vector Cross Product
US6806886B1 (en) System, method and article of manufacture for converting color data into floating point numbers in a computer graphics pipeline
US20080079712A1 (en) Dual Independent and Shared Resource Vector Execution Units With Shared Register File
US6489956B1 (en) Graphics system having a super-sampled sample buffer with generation of output pixels using selective adjustment of filtering for implementation of display effects
Eyles et al. Pixel-planes 4: A summary
US20090063608A1 (en) Full Vector Width Cross Product Using Recirculation for Area Optimization

Legal Events

Date Code Title Description
MKLA Lapsed
MKLA Lapsed

Effective date: 19970707