Publication number: US 4709231 A
Publication type: Grant
Application number: US 06/766,941
Publication date: Nov 24, 1987
Filing date: Aug 19, 1985
Priority date: Sep 14, 1984
Fee status: Paid
Inventors: Toru Sakaibara, Shigeo Tsujioka, Toshinori Kajiura, Toshihisa Aoshima, Motonobu Tonomura
Original Assignee: Hitachi, Ltd.
Shading apparatus for displaying three dimensional objects
US 4709231 A
Abstract
An apparatus for shading a polyhedron at high speed is disclosed. The apparatus includes the combination of a polygon-scan line conversion processor and an inner product interpolation processor for obtaining a pair of inner products of vectors indicative of a relation among the direction of a normal, the direction of a light source and the direction of a view point at a point within a polygon having a plurality of vertices, on the basis of the position of the point in the polygon and the direction of a normal at each of the vertices; a table which is searched on the basis of the inner products of vectors and holds a series of brightness data previously calculated for a series of values of each of the inner products; a buffer for storing the result of the table search; and a D/A conversion circuit for converting the result of the table search into a signal used as a brightness control signal in a display device.
Claims(7)
We claim:
1. A pattern shading apparatus comprising:
means for obtaining a first signal indicative of a relation among the direction of a normal, the direction of a light source and the direction of a view point each viewed at a point within a polygon having a plurality of vertices, on the basis of the position of said point in said polygon and the direction of a normal at each of said vertices;
a table for holding a series of brightness data, said brightness data being previously calculated for a series of values of said first signal, said table being searched on the basis of said first signal; and
means for converting the result obtained by the table search into a signal used in display means;
wherein said table includes a first table for holding a series of brightness data based upon a reflection of ambient light and a diffuse reflection of light from the light source and a second table for holding a series of brightness data based upon a specular reflection of light from the light source, and said first and second tables are searched on the basis of said first signal, and wherein said table further includes adding means for adding the result of search for said first table and the result of search for said second table to each other.
2. A pattern shading apparatus according to claim 1, wherein said conversion means includes a frame memory having storage locations corresponding to dots on a display screen of said display means, and the result obtained by the table search is stored in a corresponding one of said storage locations.
3. A pattern shading apparatus comprising:
means for obtaining a first signal indicative of a relation among the direction of a normal, the direction of a light source and the direction of a view point each viewed at a point within a polygon having a plurality of vertices, on the basis of the position of said point in said polygon and the direction of a normal at each of said vertices;
a table for holding a series of brightness data, said brightness data being previously calculated for a series of values of said first signal, said table being searched on the basis of said first signal; and
means for converting the result obtained by the table search into a signal used in display means;
wherein said means for obtaining said first signal includes first means for determining the direction of a normal at a first point on a first side of said polygon on the basis of the direction of a normal at each of vertices serving as both ends of said first side and for determining the direction of a normal at a second point on a second side of said polygon on the basis of the direction of a normal at each of vertices serving as both ends of said second side, and second means for obtaining said first signal indicative of a relation among the direction of a normal, the direction of the light source and the direction of the view point each viewed at a third point on a straight line connecting said first point and said second point, on the basis of the position of said third point on said straight line, the direction of a normal at said first point, and the direction of a normal at said second point.
4. A pattern shading apparatus according to claim 3, wherein said second means obtains said first signal by regarding each of the direction of the light source and the direction of the view point as constant at every point in said polygon.
5. A pattern shading apparatus according to claim 3, wherein said second means includes third means for assigning part of an angle between the direction of a normal at said first point and the direction of a normal at said second point to said third point in accordance with the position of said third point on said straight line connecting said first point and said second point, and fourth means for obtaining said first signal expressed by a relation among the assigned angle, the direction of the light source, and the direction of the view point.
6. A pattern shading apparatus according to claim 5, wherein said fourth means includes a third table searched on the basis of said assigned angle and an angle between the direction of the light source and the direction of a normal at said first point and holding a series of values of a second signal with respect to the direction of the light source, and a fourth table searched on the basis of said assigned angle and an angle between the direction of the view point and the direction of a normal at said first point and holding a series of values of a third signal with respect to the direction of the view point.
7. A pattern shading apparatus comprising:
first means for determining the direction of a normal at a first point on a first side of a polygon having a plurality of vertices on the basis of the direction of a normal at each of vertices serving as both ends of said first side and for determining the direction of a normal at a second point on a second side of said polygon on the basis of the direction of a normal at each of vertices serving as both ends of said second side, to generate first, second and third parameters on the basis of the direction of a normal at each of said first and second points, said first means including means for obtaining a first angle between the direction of a light source and the direction of a normal to a plane containing the direction of a normal at said first point and the direction of a normal at said second point and a second angle between a component of the direction of the light source parallel to said plane and the direction of a normal at said first point to deliver said first and second angles as said first parameter, means for obtaining a third angle between the direction of a normal to said plane and the direction of the bisector of an angle formed between the direction of the light source and the direction of a view point and a fourth angle between the projection of said bisector onto said plane and the direction of a normal at said first point to deliver said third and fourth angles as said second parameter, and means for dividing a fifth angle between the direction of a normal at said first point and the direction of a normal at said second point by the distance between said first point and said second point to deliver a value obtained by the division as said third parameter;
second means applied with said first, second and third parameters for delivering a pair of inner products of vectors indicative of a relation among the direction of a normal, the direction of the light source and the direction of the view point each viewed at each of points on a straight line connecting said first point and said second point, said second means including means for determining a first value at each point on said straight line connecting said first point and said second point on the basis of said first parameter and an accumulated value of said third parameter, a first table searched on the basis of said first value and holding a series of values of an inner product of vectors which is concerned with the direction of the light source, means for determining a second value at each of points on said straight line connecting said first point and said second point on the basis of said second parameter and an accumulated value of said third parameter, a second table searched on the basis of said second value and holding a series of values of another inner product of vectors which is concerned with the direction of the view point, and means for adding up the results of table search which is made for said first and second tables on the basis of said first and second values, to deliver a value indicative of that relation among the direction of a normal, the direction of the light source and the direction of the view point which is obtained at each point on said straight line; and
means for determining the brightness of each point on said straight line connecting said first point and said second point on the basis of said inner products of vectors, to display the brightness on a display device.
Description
BACKGROUND OF THE INVENTION

The present invention relates to an image processing apparatus, and more particularly to the shading of a displayed body in the processing of three-dimensional image data.

It has been well known that when a three-dimensional body is displayed on a display device of a pattern processing apparatus, such as a CRT display, the displayed body is shaded on the basis of a light reflection model, to display the body just as it would appear in reality. Such a shading technique is described, for example, in the following publications.

Bui Tuong Phong, "Illumination for Computer Generated Pictures", Communications of the ACM, 18(6), June 1975, pp. 311-317.

J. D. Foley and A. Van Dam, "Fundamentals of Interactive Computer Graphics", Addison-Wesley Publishing Company.

James F. Blinn, "Models of Light Reflection for Computer Synthesized Pictures", SIGGRAPH '77 Proceedings.

Although details of the shading technique are described in the above publications (particularly in the book by J. D. Foley and A. Van Dam), the principle of shading based upon a reflection model will be summarized below. FIG. 2 shows a reflection model used by J. F. Blinn. In FIG. 2, reference symbol N designates a unit vector normal to a reflecting surface (namely, the unit normal vector of the reflecting surface), L a unit vector starting from a reflecting point P and directed to a light source, V a unit vector starting from the reflecting point P and directed to a view point, and H a unit vector in the direction of the sum vector of the vectors L and V. Accordingly, the unit vector H is the bisector of an angle between the vector L and the vector V. According to this model, the light intensity I at the view point is expressed by the following equation:

I = Ia Ka + Ip {Kd (N·L) + Ks (N·H)^n}                                    (1)

where Ia indicates the intensity of ambient light, Ka a reflection coefficient for ambient light, Ip the intensity of light from a light source, Kd a diffuse reflection coefficient, Ks a specular reflection coefficient, and n a numerical value varying in a range from 10 to 1,000, depending upon the roughness of the reflecting surface. Incidentally, the mark "·" in the equation (1) indicates the inner product of vectors.
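
For concreteness, the reflection model can be evaluated directly as in the following Python sketch (illustrative only, not part of the patent; the helper unit() and the clamping of negative inner products to zero are assumptions added for a self-contained example):

import numpy as np

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def intensity(N, L, V, Ia, Ka, Ip, Kd, Ks, n):
    """Evaluate equation (1): I = Ia*Ka + Ip*(Kd*(N.L) + Ks*(N.H)**n)."""
    H = unit(L + V)                    # bisector of the angle between L and V
    n_dot_l = max(np.dot(N, L), 0.0)   # negative inner products clamped (assumption)
    n_dot_h = max(np.dot(N, H), 0.0)
    return Ia * Ka + Ip * (Kd * n_dot_l + Ks * n_dot_h ** n)

# Example: a point lit obliquely by a distant source and viewed along its normal.
N = unit(np.array([0.0, 0.0, 1.0]))
L = unit(np.array([1.0, 0.0, 1.0]))
V = np.array([0.0, 0.0, 1.0])
print(intensity(N, L, V, Ia=0.2, Ka=0.3, Ip=1.0, Kd=0.6, Ks=0.4, n=50))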

A three-dimensional body which is bounded by a curved surface and is to be displayed is first approximated by a polyhedron having a plurality of polygonal facets each called a patch surface, and then a unit normal vector at each vertex of the polyhedron is determined. FIG. 3 shows a polyhedron, by which a to-be-displayed body is approximated. Referring to FIG. 3, a polyhedron 1 has patch surfaces 11 to 13, and the unit normal vectors of the patch surfaces 11, 12 and 13 are expressed by N11, N12 and N13, respectively. A unit normal vector Nv at a vertex 14 is obtained by forming a unit vector along the average vector of the unit normal vectors N11, N12 and N13 of the patch surfaces 11, 12 and 13 which share the vertex 14. After the unit normal vectors at all vertices of the polyhedron have been determined in the above-mentioned manner, a unit normal vector at each point on an original curved surface corresponding to each patch surface is determined by the interpolation of the unit normal vectors at the vertices which define the patch surface. Then, the intensity of light reflected from each point on the surface of the to-be-displayed body toward the view point, that is, the brightness of each dot (namely, each pixel) on a display screen can be calculated from the equation (1). Thus, the surface of the displayed body can be shaded. Although the vector H is used in the above shading method, the vector V may be used instead of the vector H. In this case, substantially the same result as in the above method is obtained, but the calculation of light intensity at the view point becomes complicated.
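
These two steps, averaging facet normals into vertex normals and interpolating normals across a patch, can be sketched as follows (helper names are illustrative; the linear interpolation with renormalization shown here is one simple choice, whereas the embodiment described below interpolates an angle instead):

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def vertex_normal(patch_normals):
    """Unit vector along the average of the unit normals of the patch
    surfaces sharing a vertex (vector Nv at vertex 14 in FIG. 3)."""
    return unit(np.sum(patch_normals, axis=0))

def interpolated_normal(Na, Nb, t):
    """Unit normal at a point between two vertices, with 0 <= t <= 1."""
    return unit((1.0 - t) * Na + t * Nb)

N11 = unit(np.array([0.0, 0.0, 1.0]))
N12 = unit(np.array([0.0, 0.5, 1.0]))
N13 = unit(np.array([0.5, 0.0, 1.0]))
Nv = vertex_normal([N11, N12, N13])
print(Nv, interpolated_normal(N11, N12, 0.25))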

In a conventional shading apparatus, the inner products (N·L) and (N·H) have been calculated for each dot, and then the equation (1) has been calculated. Accordingly, it takes a long time to complete a displayed pattern.

SUMMARY OF THE INVENTION

A main object of the present invention is to provide a pattern shading apparatus which can shorten the processing time required for shading the displayed surface of a three-dimensional body.

In order to attain the above object, according to an aspect of the present invention, the brightness corresponding to various values of each of the inner products (N·L) and (N·H) is previously calculated and stored in tables. For example, the quantities Ia and Ip in the equation (1) can be considered to be constant for one patch surface. Accordingly, the equation (1) can be divided into two parts, one of which includes the first and second terms of the equation (1) and the other of which includes only the third term. Thus, each part of the equation (1) can be treated as a function of a single inner product of vectors. Accordingly, the equation (1) can be readily calculated by using two tables, one of which is searched with the value of the inner product (N·L) and the other of which is searched with the value of the inner product (N·H).
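
A minimal sketch of this two-table idea follows (the table resolution STEPS, the rounding of the index, and the clamping of out-of-range values are assumptions; the concrete table layout of the embodiment is given later in the description):

STEPS = 256   # assumed resolution: entries for inner product values 0, 1/STEPS, ..., 1

def build_tables(Ia, Ka, Ip, Kd, Ks, n):
    """Precompute the two parts of equation (1) for one patch surface."""
    diffuse = [Ia * Ka + Ip * Kd * (i / STEPS) for i in range(STEPS + 1)]
    specular = [Ip * Ks * (i / STEPS) ** n for i in range(STEPS + 1)]
    return diffuse, specular

def brightness(n_dot_l, n_dot_h, diffuse, specular):
    """One table search per inner product and one addition replace the
    per-dot multiplications of equation (1)."""
    i_l = max(0, min(STEPS, round(n_dot_l * STEPS)))   # clamp (assumption)
    i_h = max(0, min(STEPS, round(n_dot_h * STEPS)))
    return diffuse[i_l] + specular[i_h]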

Further, according to another aspect of the present invention, in order to carry out the interpolation of the inner products of vectors between two points, each vector is represented in a polar coordinate system. The vector L and the vector V (or H) can be considered to be constant for one patch surface. Therefore, each of the inner products (N·L) and (N·H) necessary for calculating the brightness of each dot is a function of only the vector N. Further, when an appropriate polar coordinate system is used, each of the above inner products can be expressed by a trigonometric function of a coordinate angle θ of the vector N. Accordingly, instead of calculating the inner products (N·L) and (N·H) after having determined the vector N at each point by interpolation, the inner products themselves can be directly determined by interpolation, using the coordinate angle θ as a parameter.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing an embodiment of a pattern shading apparatus according to the present invention.

FIG. 2 is a schematic diagram showing a light reflection model.

FIG. 3 is a schematic diagram showing a polyhedron, by which a to-be-displayed body is approximated.

FIG. 4 is a schematic diagram for explaining the polygon-scan line conversion.

FIGS. 5 and 6 show the polar coordinate representation of vectors.

FIG. 7 shows the relation between component values of a vector X in a polar coordinate system and a quadrant where the projection of the vector X onto the NL-J plane is present.

FIG. 8 shows formulae which are used for calculating a coordinate angle θX of the vector X in accordance with a quadrant where the above projection is present.

FIG. 9 is a block diagram showing the inner product interpolation processor of FIG. 1.

DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 1 shows the whole construction of an embodiment of a pattern shading apparatus according to the present invention.

Referring to FIG. 1, a coordinate transformation processor 2 receives position coordinates of each vertex of a polyhedron, by which a to-be-displayed body is approximated, from a host computer (not shown), to perform coordinate transformation, such as the rotation or parallel displacement of coordinate axes, if necessary. Since such coordinate transformation has no direct connection with the present invention and is explained in detail in the previously-referred book by J. D. Foley and A. Van Dam, the explanation of the coordinate transformation processor 2 will be omitted. A polygon-scan line conversion processor 3 determines the position coordinates, the unit normal vector and the polar coordinate representation of the inner products (N·L) and (N·H) for the starting and end points of each scan line, on the basis of coordinate information on the polyhedron, and then produces parameters which are to be supplied to inner product interpolation processors 41 and 42, on the basis of the above data. The function and operation of the polygon-scan line conversion processor 3 will be explained later in more detail. Although details of the inner product interpolation processor 41 will be described later, the processor 41 receives the above parameters, to calculate the inner product (N·L) for each dot on each scan line by interpolation. Similarly, the inner product interpolation processor 42 calculates the inner product (N·H) for each dot by interpolation. Brightness tables 51 to 56 hold values of the brightness Iad = Ia Ka + Ip Kd (N·L) corresponding to values of the inner product (N·L) and based upon the reflection of ambient light and the diffuse reflection of light from the light source, and values of the brightness Is = Ip Ks (N·H)^n corresponding to values of the inner product (N·H) and based upon the specular reflection of light from the light source. Further, the tables 51 and 52 correspond to the red component of reflected light, the tables 53 and 54 correspond to the green component, and the tables 55 and 56 correspond to the blue component. The brightness information read out of the tables 51 to 56 is sent to adders 61 to 63, and the values of Iad and Is which are read out of the above tables for the same color component are added to each other by a corresponding one of the adders 61 to 63. The outputs of the adders 61 to 63 are stored in frame buffers 71 to 73 dot by dot, and then the respective contents of the buffers 71 to 73 are sent to a CRT display 9 through D/A converters 81 to 83, to control the brightness of the display screen of the CRT, thereby shading a displayed body.
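
In software terms, the per-dot stage of this pipeline, searching the tables with the interpolated inner products, adding the two brightness components and storing the sum dot by dot, can be sketched as follows (a simplified analogue; the function and array names are illustrative, and the inner products are assumed to have already been delivered by processors 41 and 42):

import numpy as np

STEPS = 256                                   # assumed table resolution

def shade_scan_line(nl_values, nh_values, tables_ad, tables_s, frame, y, x0):
    """Per-dot stage of FIG. 1: table search, addition, frame-buffer store.
    nl_values, nh_values : (N.L) and (N.H) for each dot of the scan line,
                           as produced by processors 41 and 42.
    tables_ad, tables_s  : per-color tables 51/53/55 and 52/54/56.
    frame                : RGB frame buffer standing in for buffers 71 to 73."""
    for i, (nl, nh) in enumerate(zip(nl_values, nh_values)):
        ia = max(0, min(STEPS, round(nl * STEPS)))
        ih = max(0, min(STEPS, round(nh * STEPS)))
        for c in range(3):                    # red, green, blue
            frame[y, x0 + i, c] = tables_ad[c][ia] + tables_s[c][ih]

# Example wiring with dummy tables and a small frame buffer.
tables_ad = [[0.1 + 0.7 * i / STEPS for i in range(STEPS + 1)] for _ in range(3)]
tables_s = [[0.4 * (i / STEPS) ** 50 for i in range(STEPS + 1)] for _ in range(3)]
frame = np.zeros((4, 8, 3))
shade_scan_line([0.2, 0.5, 0.9], [0.1, 0.3, 0.8], tables_ad, tables_s, frame, y=1, x0=2)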

FIGS. 4 to 8 serve to explain the processing carried out by the polygon-scan line conversion processor 3. The processor 3 determines a side vector indicative of each side of a patch surface, on the basis of position coordinates of each vertex of a polyhedron, determines the unit normal vector of the patch surface as the vector product of two side vectors, and then calculates a unit normal vector at each vertex in the previously-mentioned manner. Thereafter, the processor 3 determines the starting and end points of each scan line in the same manner as in an ordinary polygon-scan line conversion method, and then calculates the unit normal vectors at points on each scan line successively.

In more detail, referring to FIG. 4 which shows a patch surface 30, a unit normal vector NL at a starting point S of a scan line 31 is calculated, by interpolation, from unit normal vectors N1 and N2 at two vertices which serve as respective ends of a side 32 containing the point S. Similarly, a unit normal vector NR at the end point T of the scan line 31 is calculated, by interpolation, from unit normal vectors N3 and N4 at two vertices which serve as respective ends of a side 33 containing the point T. Further, in order to make it easy to calculate the inner products (N·L) and (N·H) at each point on a scan line, a special coordinate system, that is, the polar coordinate system shown in FIG. 5, is used. In this polar coordinate system, the vector NL, the vector product G of the vectors NL and NR, and the vector product J of the vectors G and NL are used as coordinate axes. According to this coordinate system, a unit normal vector N at each dot position on the scan line is considered to change from the vector NL to the vector NR on the NL-J plane through the shorter path. In other words, an angle θ between the vector NL and the vector N is changed from 0 to θR (where -π < θR < π) in a direction in which the angle between the vector NL and the vector NR is less than π. Incidentally, in the above coordinate system, a given unit vector X can be represented by two polar coordinates, that is, an angle φX between the vector X and the vector G and an angle θX between the projection of the vector X onto the NL-J plane and the vector NL. In FIG. 5, angles φH and θH are the polar coordinates of the vector H, and angles φL and θL are the polar coordinates of the vector L.

Now, detailed explanation will be made of how to determine the polar coordinates φX and θX of a given unit vector X. First, the angle φX between the vector G and the vector X is given by the following equation:

φX = cos^-1 (G·X)

where 0≦φX ≦π.

Since the coordinate angle θ of the vector N, as mentioned above, changes from zero to θR through the shorter path, the angle θX can take a value satisfying the formula -π < θX < π. For example, a vector X1 shown in FIG. 6 is considered to be rotated from the vector NL in a positive direction through an angle θ1, and a vector X2 is considered to be rotated from the vector NL in a negative direction through an angle θ2.

FIG. 7 shows the relation between a quadrant where the projection of the vector X onto the NL-J plane is present, and the values of the inner products (X·NL) and (X·J), and FIG. 8 shows those formulae for calculating the angle θX which are used in accordance with a quadrant where the projection of the vector X onto the NL-J plane is present, and depending upon whether or not the absolute value of the inner product (X·NL) is smaller than the absolute value of the inner product (X·J). In other words, the angle θX can be calculated by one of the above formulae. In a case where the value of at least one of the inner products (X·NL) and (X·J) is equal to zero, the angle θX can be determined directly from the relation shown in FIG. 7. In the above explanation, a relation NL ≠ NR is assumed. However, in a case where NL = NR, that is, NL × NR = 0, a given vector perpendicular to the vector NL can be used as the vector G. In a case where NL = -NR, the unit normal vectors at opposite ends of a patch surface are anti-parallel. Such a case rarely occurs, and can be avoided by increasing the number of patch surfaces of the polyhedron.
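
The construction of this coordinate system and of the polar coordinates (φX, θX) can be summarized in the following sketch (helper names are illustrative; atan2 is used here as a compact equivalent of the quadrant case analysis of FIGS. 7 and 8, and the substitute vector chosen for G in the degenerate case NL = NR is an arbitrary one consistent with the text):

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def polar_frame(NL, NR):
    """Coordinate axes of FIG. 5: NL, G = NL x NR (normalized), J = G x NL.
    If NL = NR the vector product vanishes and any vector perpendicular to
    NL may serve as G, as noted in the text."""
    c = np.cross(NL, NR)
    if np.linalg.norm(c) < 1e-12:
        helper = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(helper, NL)) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        c = np.cross(NL, helper)
    G = unit(c)
    J = np.cross(G, NL)
    return G, J

def polar_coords(X, NL, G, J):
    """phi_X: angle between G and X.  theta_X: angle, in (-pi, pi], of the
    projection of X onto the NL-J plane measured from NL."""
    phi = np.arccos(np.clip(np.dot(G, X), -1.0, 1.0))
    theta = np.arctan2(np.dot(X, J), np.dot(X, NL))
    return phi, theta

NL = unit(np.array([0.0, 0.1, 1.0]))
NR = unit(np.array([0.2, 0.0, 1.0]))
G, J = polar_frame(NL, NR)
Lvec = unit(np.array([0.5, 0.5, 1.0]))
print(polar_coords(Lvec, NL, G, J))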

The angles φH, φL, θH and θL shown in FIG. 5 can be determined in the above-mentioned manner. Further, the angle θ for each dot on a scan line is increased from 0 to θR in such a manner that the angle θ is incremented by a small angle Δθ at intervals of one dot. Accordingly, the angle Δθ can be obtained by dividing the angle θR by one less than the number of dots on the scan line.

In brief, the polygon-scan line conversion processor 3 calculates the above-mentioned angles φH, φL, θH, θL and Δθ, which are supplied to the inner product interpolation processors 41 and 42.

Next, explanation will be made of the inner product interpolation processors 41 and 42. Referring back to FIG. 5, the inner product (N·L) can be expressed by the following equation:

(N·L) = cos θ · sin φL · cos θL + sin θ · sin φL · sin θL = 1/2 {sin (φL + θL - θ) + sin (φL - θL + θ)}                    (2)

Similarly, the inner product (N·H) can be expressed by the following equation:

(N·H) = 1/2 {sin (φH + θH - θ) + sin (φH - θH + θ)}                    (3)

Usually, the distance between a patch surface and each of the light source and the view point is far larger than the dimensions of the patch surface, and hence the vectors L and H are considered to be constant for all points on one patch surface. Accordingly, each of the equations (2) and (3) is a function of only the angle θ, and the angle θ is successively incremented by Δθ, to change from 0 to θR.
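
The identity of equation (2) can be checked numerically, as in the sketch below (the particular vectors NL, NR and L are arbitrary illustrative choices; N is rotated by θ in the NL-J plane as described above):

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Arbitrary patch-edge normals and light direction for the check.
NL = unit(np.array([0.1, 0.2, 1.0]))
NR = unit(np.array([0.4, -0.1, 1.0]))
L = unit(np.array([0.3, 0.8, 0.5]))

G = unit(np.cross(NL, NR))          # vector product of NL and NR
J = np.cross(G, NL)

phi_L = np.arccos(np.clip(np.dot(G, L), -1.0, 1.0))
theta_L = np.arctan2(np.dot(L, J), np.dot(L, NL))
theta_R = np.arctan2(np.dot(NR, J), np.dot(NR, NL))

for theta in np.linspace(0.0, theta_R, 5):
    N = np.cos(theta) * NL + np.sin(theta) * J     # N rotated by theta in the NL-J plane
    direct = np.dot(N, L)                          # the inner product itself
    closed = 0.5 * (np.sin(phi_L + theta_L - theta) + np.sin(phi_L - theta_L + theta))
    assert abs(direct - closed) < 1e-9             # equation (2)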

FIG. 9 shows the construction of the inner product interpolation processor 41 for calculating the inner product (N·L). Referring to FIG. 9, a latch circuit 411 holds an angle Δθ which is supplied from the polygon-scan line conversion processor 3, and supplies a value -Δθ and another value Δθ to one input terminal of an adder 412 and one input terminal of an adder 413, respectively. The other input terminal of the adder 412 receives the output of a latch circuit 414. A value φL + θL from the polygon-scan line conversion processor 3 is applied, as an initial value, to the latch circuit 414. The other input terminal of the adder 413 receives the output of a latch circuit 415. A value φL - θL from the polygon-scan line conversion processor 3 is applied, as an initial value, to the latch circuit 415. The outputs of the adders 412 and 413 are applied to the latch circuits 414 and 415, respectively. Accordingly, the contents of the latch circuits 414 and 415 are successively decremented and incremented by Δθ, respectively, for a series of dots on a scan line. Thus, the latch circuits 414 and 415 can hold the values (φL + θL - θ) and (φL - θL + θ) for the dot now scanned, respectively. Sine tables 416 and 417 are searched on the basis of the outputs of the latch circuits 414 and 415, and deliver sine values of the above outputs. The sine values from the tables 416 and 417 are added to each other by an adder 418. A shifter 419 performs a shifting operation on the output of the adder 418 so as to deliver one-half of this output, namely, a value given by the equation (2).
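
In software terms, the incremental scheme of FIG. 9 amounts to the following sketch (a functional analogue rather than a register-level model; the sine tables 416 and 417 are replaced by direct sine evaluations, and the shifter is written as a multiplication by one half):

import math

def interpolate_inner_products(phi, theta_0, delta_theta, num_dots):
    """Emulate inner product interpolation processor 41: deliver the value
    of equation (2) for each dot of a scan line by incremental updates."""
    a = phi + theta_0            # latch 414, decremented by delta_theta per dot
    b = phi - theta_0            # latch 415, incremented by delta_theta per dot
    values = []
    for _ in range(num_dots):
        values.append(0.5 * (math.sin(a) + math.sin(b)))   # adder 418 and shifter 419
        a -= delta_theta         # adder 412 (adds -delta_theta)
        b += delta_theta         # adder 413 (adds +delta_theta)
    return values

# For processor 41 the arguments are (phi_L, theta_L, ...); processor 42
# would use (phi_H, theta_H, ...) in the same way.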

The inner product interpolation processor 42 for the inner product (N·H) is used for calculating the equation (3). The processor 42 has the same construction as shown in FIG. 9, except that the angles φH and θH are used in place of the angles φL and θL.

The output of the inner product interpolation processor 41 is used for searching the brightness tables 51, 53 and 55, and the output of the inner product interpolation processor 42 is used for searching the brightness tables 52, 54 and 56. The equation (1) can be divided into the following two equations:

Iad = Ia Ka + Ip Kd (N·L)                    (4)

Is = Ip Ks (N·H)^n                    (5)

The equation (4) indicates the light intensity based upon the reflection of ambient light and the diffuse reflection of light from the light source, and the equation (5) indicates the light intensity based upon the specular reflection of light from the light source. That is, the equations (4) and (5) correspond to two components of the brightness of a displayed body. The values of the equations (4) and (5) with respect to each of the red-, green- and blue-components are previously calculated for those values of the inner products (N·L) and (N·H) which are given at an appropriate interval, using the reflection coefficients of a to-be-displayed body with respect to the above-mentioned three kinds of reflection, the intensity of ambient light, the intensity of light from the light source, and the value of n. The values of the equation (4) for the red-, green- and blue-components are stored in the tables 51, 53 and 55, respectively, and the values of the equation (5) for the red-, green- and blue-components are stored in the tables 52, 54 and 56, respectively. For example, in a case where the values of each of the inner products (N·L) and (N·H) are given at intervals of 1/256, the number of entries included in each of the tables 51 to 56 is made equal to 257. That is, the value of the equation (4) or (5) obtained when the value of the inner product (N·L) or (N·H) is zero is stored in the first entry, the value of the equation (4) or (5) corresponding to the value of the inner product equal to 1/256 is stored in the second entry, the value of the equation (4) or (5) corresponding to the value of the inner product equal to 255/256 is stored in the two hundred fifty-sixth entry, and the value of the equation (4) or (5) corresponding to the value of the inner product equal to 1 (one) is stored in the two hundred fifty-seventh entry. The values of each of the equations (4) and (5) may be calculated by the host computer or may be calculated by a processor attached to the display device.
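
A sketch of this table layout follows (the coefficient values used to fill the example tables are illustrative, not taken from the patent; the index mapping simply counts entries from zero where the text counts them from one):

ENTRIES = 257        # entry k (counted from zero) holds the value for inner product k/256

def table_index(inner_product):
    """Map an inner product sampled at intervals of 1/256 to its entry:
    0 for 0, 1 for 1/256, ..., 256 for 1."""
    return round(inner_product * 256)

def build_table(f):
    """Precompute f, e.g. equation (4) or (5), at the 257 sample points."""
    return [f(k / 256) for k in range(ENTRIES)]

# Illustrative red-component tables (coefficients are assumptions).
table_51 = build_table(lambda x: 0.2 * 0.3 + 1.0 * 0.7 * x)    # equation (4)
table_52 = build_table(lambda x: 1.0 * 0.4 * x ** 50)          # equation (5)
assert table_index(1.0) == 256 and len(table_51) == ENTRIES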

According to the present embodiment, the brightness tables are used, and hence the calculation of the equation (1) can be achieved only by one table searching process and one adding process after the values of the inner products (N·L) and (N·H) have been determined. In contrast, according to a conventional method, even when the values of Ia Ka, Ip Kd and Ip Ks have previously been calculated, two multiplying operations and two adding operations are required for calculating the equation (1). Further, even when the time required for calculating the values stored in the tables 51 to 56 is taken into consideration, the present invention is superior to the conventional method, since a displayed body generally includes a large number of dots having the same values of the inner products (N·L) and (N·H).

Further, according to the present embodiment, the calculation of the inner products (N·L) and (N·H) at each dot includes two adding processes and one table searching process. In contrast, according to the conventional method, six multiplying operations and four adding operations are required for calculating the above inner products.

As has been explained in the foregoing, a pattern shading apparatus according to the present invention includes brightness tables. Thus, the time necessary for obtaining the brightness of each dot on the basis of the values of the inner products is greatly reduced. Further, according to the present invention, the time required for calculating the inner products by interpolation can be greatly shortened by using a polar coordinate system.

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4475104 * | Jan 17, 1983 | Oct 2, 1984 | Lexidata Corporation | Three-dimensional display system
US4586038 * | Dec 12, 1983 | Apr 29, 1986 | General Electric Company | True-perspective texture/shading processor
US4615013 * | Aug 2, 1983 | Sep 30, 1986 | The Singer Company | Method and apparatus for texture generation
Non-Patent Citations
Reference
1. Bui Tuong Phong, "Illumination for Computer Generated Pictures", Communications of the ACM, 18(6), June 1975, pp. 311-317.
2. J. D. Foley and A. Van Dam, "Fundamentals of Interactive Computer Graphics", Addison-Wesley Publishing Company.
3. James F. Blinn, "Models of Light Reflection for Computer Synthesized Pictures", SIGGRAPH '77 Proceedings.
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US4751660 *Jun 26, 1986Jun 14, 1988Sony CorporationDetermining orientation of transformed image
US4805117 *Sep 26, 1986Feb 14, 1989International Business Machines CorporationMethod for controlling concatenation of transformation matrices in a graphics display system
US4831528 *Nov 9, 1987May 16, 1989General Electric CompanyApparatus and method for improvement of 3D images derived from tomographic data
US4837447 *May 6, 1986Jun 6, 1989Research Triangle Institute, Inc.Rasterization system for converting polygonal pattern data into a bit-map
US4866637 *Oct 30, 1987Sep 12, 1989International Business Machines CorporationPipelined lighting model processing system for a graphics workstation's shading function
US4879652 *Sep 4, 1987Nov 7, 1989General Electric CompanyMethod for producing three-dimensional images from nuclear data
US4885703 *Nov 4, 1987Dec 5, 1989Schlumberger Systems, Inc.3-D graphics display system using triangle processor pipeline
US4888711 *Nov 16, 1987Dec 19, 1989General Electric CompanyImage interpretation method and apparatus using faces for constraint satisfaction
US4888712 *Nov 4, 1987Dec 19, 1989Schlumberger Systems, Inc.Guardband clipping method and apparatus for 3-D graphics display system
US4899295 *May 20, 1987Feb 6, 1990Quantel LimitedVideo signal processing
US4901064 *Nov 4, 1987Feb 13, 1990Schlumberger Technologies, Inc.Normal vector shading for 3-D graphics display system
US4930091 *Nov 4, 1987May 29, 1990Schlumberger Systems, Inc.Triangle classification setup method and apparatus for 3-D graphics display system
US4943938 *Oct 14, 1986Jul 24, 1990Hitachi, Ltd.System for displaying shaded image of three-dimensional object
US4945500 *Nov 20, 1989Jul 31, 1990Schlumberger Technologies, Inc.Triangle processor for 3-D graphics display system
US4958300 *Oct 20, 1988Sep 18, 1990Daikin Industries, Ltd.Polygon filling control apparatus
US4965844 *Mar 31, 1986Oct 23, 1990Sony CorporationMethod and system for image transformation
US5060172 *Jul 6, 1989Oct 22, 1991Digital Equipment CorporationMethod and apparatus for displaying smooth-shaded objects
US5067098 *Mar 31, 1989Nov 19, 1991The Ohio State University Research FoundationSlope-aspect color shading for parametric surfaces
US5222202 *Oct 11, 1990Jun 22, 1993International Business Machines CorporationMethod and apparatus for visualization of iso-valued surfaces
US5222203 *May 4, 1992Jun 22, 1993Daikin Industries, Ltd.Method and apparatus for displaying translucent surface
US5222204 *Mar 14, 1990Jun 22, 1993Hewlett-Packard CompanyPixel interpolation in perspective space
US5237656 *Apr 30, 1987Aug 17, 1993Fanuc Ltd.Image processing apparatus using look-up tables
US5245700 *Nov 21, 1989Sep 14, 1993International Business Machines CorporationAdjustment of z-buffer values for lines on the surface of a polygon
US5268996 *Dec 20, 1990Dec 7, 1993General Electric CompanyComputer image generation method for determination of total pixel illumination due to plural light sources
US5278917 *May 13, 1991Jan 11, 1994Mitsubishi Denki Kabushiki KaishaLathe cutting simulation method
US5295235 *Feb 14, 1992Mar 15, 1994Steve NewmanPolygon engine for updating computer graphic display employing compressed bit map data
US5305430 *Dec 26, 1990Apr 19, 1994Xerox CorporationObject-local sampling histories for efficient path tracing
US5313568 *Jul 6, 1993May 17, 1994Hewlett-Packard CompanyThree dimensional computer graphics employing ray tracing to compute form factors in radiosity
US5357599 *Jul 30, 1992Oct 18, 1994International Business Machines CorporationMethod and apparatus for rendering polygons
US5369737 *Aug 15, 1990Nov 29, 1994Digital Equipment CorporationNormalization of vectors associated with a display pixels of computer generated images
US5384719 *Jun 3, 1991Jan 24, 1995Rediffusion Simulation LimitedImage generator for simulating the illumination effects of a vehicle-mounted light source on an image displayed on a screen
US5446479 *Aug 4, 1992Aug 29, 1995Texas Instruments IncorporatedMulti-dimensional array video processor system
US5452406 *May 14, 1993Sep 19, 1995Microsoft CorporationMethod and system for scalable borders that provide an appearance of depth
US5546327 *Jul 11, 1994Aug 13, 1996Matsushita Electric Industrial Co., Ltd.Apparatus for calculating geometrical view factor
US5563989 *Sep 30, 1993Oct 8, 1996Canon Kabushiki KaishaApparatus and method for performing lighting calculations for surfaces of three-dimensional objects
US5590267 *Jun 5, 1995Dec 31, 1996Microsoft CorporationMethod and system for scalable borders that provide an appearance of depth
US5630039 *Feb 28, 1994May 13, 1997International Business Machines CorporationTessellating complex in polygons in modeling coordinates
US5680153 *Aug 22, 1994Oct 21, 1997Canon Kabushiki KaishaImage Processing apparatus
US5742292 *Mar 24, 1997Apr 21, 1998Kabushiki Kaisha ToshibaSystem and method for realistically displaying images indicating the effects of lighting on an object in three dimensional space
US5742749 *Feb 20, 1996Apr 21, 1998Silicon Graphics, Inc.Method and apparatus for shadow generation through depth mapping
US5761068 *Feb 29, 1996Jun 2, 1998Mitsubishi Denki Kabushiki KaishaCAD/CAM apparatus and method providing improved display of machined workpiece
US5771975 *Feb 14, 1997Jun 30, 1998Northrop Grumman CorporationComposite cylinder termination
US5798765 *Aug 8, 1997Aug 25, 1998Motorola, Inc.Three dimensional light intensity display map
US5805782 *Jan 23, 1996Sep 8, 1998Silicon Graphics, Inc.Method and apparatus for projective texture mapping rendered from arbitrarily positioned and oriented light source
US5813467 *Feb 14, 1997Sep 29, 1998Northrop Grumman CorporationComposite cylinder termination formed using snap ring
US5870096 *Jul 11, 1994Feb 9, 1999Hitachi, Ltd.Method and apparatus for displaying images
US5905503 *Sep 3, 1996May 18, 1999U.S. Philips CorporationRendering an image using lookup tables giving illumination values for each light source by direction and distance
US5936629 *Nov 20, 1996Aug 10, 1999International Business Machines CorporationAccelerated single source 3D lighting mechanism
US5966454 *Sep 14, 1995Oct 12, 1999Bentley Mills, Inc.Methods and systems for manipulation of images of floor coverings or other fabrics
US5974189 *May 24, 1993Oct 26, 1999Eastman Kodak CompanyMethod and apparatus for modifying electronic image data
US6005969 *Oct 15, 1997Dec 21, 1999Interface, Inc.Methods and systems for manipulation of images of floor coverings or other fabrics
US6031542 *Feb 11, 1997Feb 29, 2000Gmd - Forschungszentrum Informationstechnik GmbhImage processing method and arrangement for the display of reflective objects
US6081274 *Aug 28, 1997Jun 27, 2000Ricoh Company, Ltd.Shading processing device
US6151026 *Jul 23, 1999Nov 21, 2000Sega Enterprises, Ltd.Image processing apparatus and image processing method
US6175367 *Apr 23, 1997Jan 16, 2001Silicon Graphics, Inc.Method and system for real time illumination of computer generated images
US6226007May 21, 1999May 1, 2001Sun Microsystems, Inc.Method and apparatus for modeling specular reflection
US6433782 *Feb 27, 1996Aug 13, 2002Hitachi, Ltd.Data processor apparatus and shading apparatus
US6545677Apr 30, 2001Apr 8, 2003Sun Microsystems, Inc.Method and apparatus for modeling specular reflection
US6552726 *Jul 17, 1998Apr 22, 2003Intel CorporationSystem and method for fast phong shading
US6567083Sep 25, 1997May 20, 2003Microsoft CorporationMethod, system, and computer program product for providing illumination in computer graphics shading and animation
US6654490 *Aug 27, 2001Nov 25, 2003Limbic Systems, Inc.Method for conducting analysis of two-dimensional images
US6720974Feb 8, 2002Apr 13, 2004Sony CorporationArithmetic unit and arithmetic processing method
US6806875Jun 21, 2002Oct 19, 2004Renesas Technology Corp.Data processing apparatus and shading apparatus
US6819805 *May 2, 2001Nov 16, 2004Agilent Technologies, Inc.Method and apparatus for brightness equalization of images taken with point source illumination
US7006685Nov 3, 2003Feb 28, 2006Lumeniq, Inc.Method for conducting analysis of two-dimensional images
US7034827 *Jul 17, 1998Apr 25, 2006Intel CorporationExtension of fast phong shading technique for bump mapping
US7050054May 16, 2001May 23, 2006Ngrain (Canada) CorporationMethod, apparatus, signals and codes for establishing and using a data structure for storing voxel information
US7064756Sep 23, 2004Jun 20, 2006Renesas Technology CorporationData processing apparatus and shading apparatus
US7068829Feb 4, 2004Jun 27, 2006Lumeniq, Inc.Method and apparatus for imaging samples
US7079138Jul 28, 2004Jul 18, 2006Sony Computer Entertainment America Inc.Method for computing the intensity of specularly reflected light
US7116806Oct 12, 2004Oct 3, 2006Lumeniq, Inc.Systems and methods relating to AFIS recognition, extraction, and 3-D analysis strategies
US7356171Sep 26, 2006Apr 8, 2008Lumeniq, Inc.Systems and methods relating to AFIS recognition, extraction, and 3-D analysis strategies
US7786993Sep 7, 2005Aug 31, 2010Sony Computer Entertainment America LlcEnvironment mapping
US8133115Oct 22, 2003Mar 13, 2012Sony Computer Entertainment America LlcSystem and method for recording and displaying a graphical path in a video game
US8204272Jun 17, 2011Jun 19, 2012Sony Computer Entertainment Inc.Lighting control of a user environment via a display device
US8243089Feb 1, 2011Aug 14, 2012Sony Computer Entertainment Inc.Implementing lighting control of a user environment
US8284310Apr 5, 2011Oct 9, 2012Sony Computer Entertainment America LlcDelay matching in audio/video systems
US8289325Oct 7, 2008Oct 16, 2012Sony Computer Entertainment America LlcMulti-pass shading
US8294713 *Mar 23, 2009Oct 23, 2012Adobe Systems IncorporatedMethod and apparatus for illuminating objects in 3-D computer graphics
US8355022Nov 25, 2008Jan 15, 2013Sony Computer Entertainment America LlcMethod and apparatus for aggregating light sources per-vertex in computer graphics
US20100128038 *Nov 25, 2008May 27, 2010Sony Computer Entertainment America Inc.Method and apparatus for interpolating color and direction as one entity in computer graphics
DE19606357A1 *Feb 12, 1996Aug 14, 1997Gmd GmbHBildverarbeitungsverfahren zur Darstellung von spiegelnden Objekten und zugehörige Vorrichtung (Image processing method for the display of reflective objects and associated apparatus)
EP0251800A2 *Jul 2, 1987Jan 7, 1988Hewlett-Packard CompanyMethod and apparatus for deriving radiation images using a light buffer
EP0327002A2 *Jan 30, 1989Aug 9, 1989Kabushiki Kaisha ToshibaApparatus and method for generating high-quality pattern
EP0590980A2 *Sep 30, 1993Apr 6, 1994Canon Kabushiki KaishaProcessing image data
EP0974935A1 *Mar 31, 1998Jan 26, 2000Sega Enterprises, Ltd.Spotlight characteristic forming method and image processor using the same
EP1233617A2 *Feb 7, 2002Aug 21, 2002Sony CorporationArithmetic unit and arithmetic processing method
WO1999016021A1 *Sep 25, 1998Apr 1, 1999Silicon Graphics IncMethod, system, and computer program product for providing illumination in computer graphics shading and animation
WO2000072271A1 *May 12, 2000Nov 30, 2000Sun Microsystems IncMethod and apparatus for modeling specular reflection
WO2002007088A2 *Jul 10, 2001Jan 24, 2002Paul A HalmshawApparatus and method for diffuse and specular lighting of voxel data
Classifications
U.S. Classification: 345/426
International Classification: G06T15/80, G06F3/153, G09G5/36, G06T1/00, G09G5/02
Cooperative Classification: G06T15/506
European Classification: G06T15/50M
Legal Events
Date | Code | Event | Description
Apr 29, 1999 | FPAY | Fee payment | Year of fee payment: 12
Apr 3, 1995 | FPAY | Fee payment | Year of fee payment: 8
Apr 1, 1991 | FPAY | Fee payment | Year of fee payment: 4
Aug 19, 1985 | AS | Assignment | Owner name: HITACHI, LTD., 6, KANDA SURUGADAI 4-CHOME, CHIYODA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: SAKAIBARA, TORU; TSUJIOKA, SHIGEO; KAJIURA, TOSHINORI; AND OTHERS; Reel/Frame: 004446/0042; Effective date: 19850806