|Publication number||USRE42366 E1|
|Application number||US 10/781,265|
|Publication date||May 17, 2011|
|Filing date||Feb 19, 2004|
|Priority date||Dec 18, 1995|
|Also published as||US5963668, US6396952|
|Inventors||Junji Horikawa, Takashi Totsuka|
|Original Assignee||Sony Corporation|
This application is a continuation of application Ser. No. 08/755,129, filed on Nov. 25, 1996, now U.S. Pat. No. 5,963,668.
1. Field of the Invention
The present invention relates to a method and apparatus for hierarchically approximating shape data with an image, in which the data amount is reduced by reducing the complexity of the shape of a geometric model used in generating CG (Computer Graphics), thereby enabling the CG to be drawn at a high speed. The invention also relates to a method and apparatus for hierarchically approximating shape data with an image which are suitable for use in games using CG, VR (Virtual Reality), design work, and the like, since the shape is approximated and changed in a manner that does not give the observer a sense of incongruity.
2. Description of the Prior Art
When drawing using a model as part of computer graphics, the same model may be used repeatedly. For example, as shown in
However, when the observer pays no attention to the model because the model looks small on the picture plane or lies outside the target point of the picture plane, it is not always necessary to draw using the model with a high degree of detail. That is, by using a similar model whose degree of detail is reduced to a certain extent, for example by reducing the number of vertices of the model or the number of planes of a polygon, it can appear as if the same model were used.
Such an approximation of the model is useful for the drawing of the CG display as mentioned above. However, if the data amount of the model is simply reduced by approximating away the details of the model, the observer feels incongruity when he sees the approximated model. If this sense of incongruity can be suppressed, the demands of both drawing speed and drawing quality can be satisfied. For this purpose, it is desirable to reduce the data amount in such a manner that the general characteristic portions of the model are preserved and the other portions are reduced. Hitherto, such an approximation of the model has often been executed by the manual work of a designer, so that much expense and time are necessary for this work.
A method of obtaining a more realistic image by adhering a two-dimensional image to a plane of a model as a drawing target is generally used. This is called texture mapping, and the image that is adhered in this instance is called a texture. When the approximation of the shape as mentioned above is applied to a model which was subjected to texture mapping, it is necessary to also pay attention to the texture adhered to the model planes. That is, it is necessary to prevent a deterioration in the appearance of the model due to a deformation of the texture shape at the time of approximation, and to prevent the problem that the amount of work increases because the texture must be adhered again to the approximated model.
In past studies, according to Francis J. M. Schmitt, Brian A. Barsky, and Wen-Hui Du, "An Adaptive Subdivision Method for Surface-Fitting from Sampled Data", Computer Graphics, Vol. 20, No. 4, August 1986, the shape is approximated by fitting Bézier patches to a three-dimensional shape, but there is a problem in that a general polygon model is not a target.
According to Greg Turk, "Re-Tiling Polygonal Surfaces", Computer Graphics, Vol. 26, No. 2, July 1992, an attempt is made to hierarchically approximate a polygon model. There is, however, a problem in that although the algorithm in that paper can be applied to a rounded shape, it is not suitable for an angular shape, and a general shape is not a target. Further, approximating the shape on the basis of characteristic points of the object shape is not considered.
Further, according to Hugues Hoppe et al., "Mesh Optimization", Computer Graphics Proceedings, Annual Conference Series, SIGGRAPH 1993, a model is approximated in such a manner that an energy is introduced to evaluate the approximated model, and operations for removing an edge, splitting a patch, and swapping an edge are repeated so as to minimize the energy. According to the method of that paper, however, a long repetitive calculation is necessary until the minimum point of the energy is found. In addition, a solving method such as simulated annealing is necessary, in a manner similar to other energy minimization problems, so as not to become trapped at a local minimum. There is also no guarantee that the energy minimum is always the visually best point.
Further, in those papers, no consideration is given to the texture adhered to the model upon approximation. Consequently, approximating the model according to the methods in those papers has the problem that a duplicated process is required in which the texture must be newly adhered to the approximated model after the approximation.
As mentioned above, the past studies have problems regarding the approximation of a model when a polygon is drawn. That is, the conventional methods have problems in that the applicability of the shape approximation is limited, a long calculation time is necessary for the approximation, and an approximation which takes required characteristic points into consideration is not executed. An approximation of figure data realizing a switching of continuous layers, in which the sense of incongruity given to the observer at the time of switching the approximated model is considered, is not executed.
When the approximation is applied to a geometric model to which a texture is adhered, there is a problem in that no measure is taken to prevent a quality deterioration after the approximation by keeping the shape of the texture adhered to the model. There is also a problem in that no measure is taken to eliminate the necessity of newly adhering the texture after the approximation. Further, there is a problem in that an approximation which considers the existence of the texture itself is not executed.
It is, therefore, an object of the invention to provide a method and apparatus for hierarchically approximating figure data with an image in the drawing of CG so that high-speed drawing is performed while maintaining a quality of the drawing.
It is another object of the invention to provide a method and apparatus for hierarchically approximating figure data with an image in which the approximation of a geometric model is performed in consideration of the existence of the texture itself.
According to the invention, in order to solve the above problems, there is provided a hierarchical approximating method of shape data for approximating shape data to data of a desired resolution, comprising the steps of: evaluating an importance of each of the edges which construct the shape data; removing an unnecessary edge on the basis of a result of the edge evaluation; and determining a vertex position after the unnecessary edge was removed.
According to the invention, in order to solve the above problems, there is provided a hierarchical approximating method of shape data with an image for approximating shape data to which image data was adhered to data of a desired resolution, comprising the steps of: determining which edge in the shape data should be removed upon approximation; determining a new vertex position in the shape data after the edge removal performed on the basis of the edge removal determination; and removing an unnecessary vertex in the image data adhered to the shape data in accordance with outputs from the edge removal determining step and the vertex movement determining step and moving a vertex on the image data in accordance with the new vertex position in the shape data.
According to the invention, in order to solve the above problems, there is provided an approximating apparatus for figure data for approximating shape data to that of a desired resolution, comprising: evaluating means for evaluating an importance of each of the edges which construct the shape data; edge removing means for removing an unnecessary edge on the basis of a result of the edge evaluation; and vertex position determining means for determining a vertex position after the unnecessary edge was removed.
According to the invention, in order to solve the above problems, there is provided a hierarchical approximating apparatus for figure data with image data for approximating shape data to which image data is adhered to data of a desired resolution, comprising: edge removal determining means for determining which edge in the shape data is removed upon approximation; vertex movement determining means for determining a new vertex position in the shape data after the edge removal; and image data removal and movement determining means for removing an unnecessary vertex in the image data adhered to the shape data in accordance with outputs from the edge removal determining means and the vertex movement determining means and for moving the vertex on the image data in accordance with the new vertex position in the shape data.
According to the invention as mentioned above, the importance of each of the edges of the shape data is evaluated, the unnecessary edge is removed on the basis of the evaluation, a new vertex position after the edge removal is determined, and further, the vertex is moved on the image data in accordance with the new vertex position. Thus, the shape data can be approximated so that the change in shape is small while suppressing the deterioration of the image data adhered to the shape model.
The above and other objects and features of the present invention will become apparent from the following detailed description and the appended claims with reference to the accompanying drawings.
An embodiment of the invention will now be described hereinbelow with reference to the drawings.
As shown in
First, processes in the flowchart shown in
In the first step S1, original polygon data is inputted. The texture is adhered to each plane of the inputted polygon data. The input of the data and the adhesion of the texture are performed manually from the keyboard 1, or by a method whereby data which has been created elsewhere and stored on a floppy disk or an MO disk is read out by the FDD 2 or MO drive 3. The polygon data can also be inputted through a network such as the Internet.
In step S2, each edge of the inputted polygon data is evaluated for performing the edge removal. In the edge evaluation in step S2, each edge of the inputted polygon data is converted into a numerical value by a method, which will be described below, and is set to an evaluation value. In step S3, the evaluation values of the edges obtained in step S2 are sorted and the edge having the minimum evaluation value is selected (i.e., identified). The processing routine advances to step S4. In step S4, the edge having the minimum evaluation value that was selected in step S3 is removed.
When the edge is removed in step S4, the processing routine advances to step S5. In step S5, the position of the vertex which remains after the edge was removed in step S4 is determined. In step S6, the texture portion which becomes unnecessary in association with the edge removal is removed and the positions of the remaining texture coordinates are determined.
Approximated polygon data that was approximated at a precision of one stage and was subjected to the texture mapping is obtained by the foregoing processes in steps S2 to S6. The edge removal, the determination of a new vertex, and the associated processing of the texture are repeated by repeatedly executing the processes in steps S2 to S6. Consequently, the approximated polygon data which was subjected to the texture mapping can be obtained (i.e., created) at a desired precision.
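The repetition of steps S2 to S6 down to a desired precision can be sketched as follows. This is a toy model, not the patented implementation: the mesh is reduced to a map from hypothetical edge identifiers to precomputed evaluation values, and the vertex repositioning (step S5) and the texture update (step S6) are elided.

```python
def approximate(edge_evals, target_count):
    """Repeat the evaluate/select/remove cycle (steps S2-S4) until the
    desired precision (remaining edge count, step S7) is reached.
    edge_evals: dict mapping an edge id to its evaluation value."""
    evals = dict(edge_evals)            # keep the caller's data intact
    removed = []
    while len(evals) > target_count:
        e = min(evals, key=evals.get)   # steps S2-S3: minimum evaluation
        removed.append(e)               # step S4: remove that edge
        del evals[e]
        # steps S5-S6 (new vertex position, texture update) would go here
    return removed, evals
```

Each pass through the loop yields the approximated model of one more layer, which is how the hierarchy of models used later for level-of-detail switching is produced.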
When the approximated polygon data that was subjected to the texture mapping at a desired precision in step S6 is obtained (step S7), the processing routine advances to step S8. The obtained texture-mapped approximated polygon data is drawn on the display apparatus 8. It can also be stored in an external memory apparatus such as the hard disk 6 or memory 7, on a floppy disk inserted in the FDD 2, or on an MO inserted in the MO drive 3. The derived data can also be supplied to and stored in another computer system through the network.
The processes in the above flowchart are executed mainly by the CPU 4 in the hardware structure of FIG. 2. Instructions or the like which are necessary during the processes are sent to the CPU 4 from an input device such as the keyboard 1.
Processes regarding the model approximation will now be described. As mentioned above, the approximation of the polygon model is executed by repeating the edge removal. In this instance, small convex and concave components which do not contribute to the general shape of the model are judged, and the edges which should be preferentially removed are determined on the basis of the judgment result. In order to select the edges which are preferentially removed, the extent to which each edge constructing the model contributes to the general shape, namely the importance of each edge, is evaluated, and the edge with the smallest evaluation value is removed. In step S2, the importance of each edge is evaluated.
In order to select the edge which is suitable to be removed by obtaining the evaluation value, an evaluation function to evaluate the extent to which each of the edges constructing the polygon model contributes to the shape of the polygon model is introduced. The following equation (1) shows an example of the evaluation function.
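The equation (1) itself is not reproduced in this text. A plausible reconstruction from the description that follows (Vi the volume change on removing edge i, Si the edge length times the areas of the adjacent planes, a and b user-set coefficients) is:

```latex
f(e_i) = a\,V_i + b\,S_i \qquad (1)
```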
The equation (1) is constructed of two terms. The first term Vi shows the volume amount which is changed when the edge as an evaluation target is removed. The volume amount here denotes a virtual volume of the shape specified by the shape data of the polygon. The second term Si shows a value obtained by multiplying the areas of the planes existing on both sides of the target edge by the length of the target edge; it denotes the change amount contributed by the planes including only the target edge. The two terms are multiplied by coefficients a and b, respectively. The user can select which of the first term Vi and the second term Si is preferentially used by properly setting the values of the coefficients.
The first term Vi largely depends on the peripheral shape of the edge as an evaluation target. On the other hand, the second term Si depends on the length of the target edge and the areas of the planes existing on both sides of the target edge. In the case of a polygon model having a flat shape like a sheet of paper, when the edge e(v1, v2) is removed, the change amount given by the term Si is larger than that given by the term Vi. In a polygon model constructed of planes which all have similar shapes and areas, for example, in the model shown in
The value of the equation (1) is calculated with respect to each of the edges constructing the polygon model and the evaluation value for each edge is obtained. In step S3, the calculation values are sorted in accordance with the values and the edge having the minimum evaluation value is selected, thereby obtaining the edge whose contribution to the model shape when the edge is removed is the smallest.
When the importance of the edge is evaluated in step S2, the length of the edge is also considered: when the evaluation values are equal, the shorter edge can be set as the target to be removed.
Although a local evaluation value in the polygon model is obtained by the equation (1), each edge can also be evaluated by a value obtained by adding the evaluation values of the peripheral edges to the evaluation value of a certain target edge. In this case, the evaluation can be performed not only with the peripheral shape of one edge but also with the shape of a wide range. When the area which the user wants to evaluate is wide as mentioned above, the calculation range of the equation (1) can be widened accordingly.
In addition to the evaluation value simply derived by the calculation of the equation (1), the user can give the evaluation value or can operate the evaluation value. Therefore, when there is a portion which the user wants to leave intact without approximation or a portion which he, contrarily, wants to approximate, the intention of the designer or operator can be reflected in the approximating process by designating such a portion. In this case, the evaluation value is determined by executing a weighted addition by giving a weight coefficient to each of the value operated by the user and the calculated evaluation value.
In this case, the approximation in which the intention of the designer is reflected can be performed by giving a weight coefficient, for example, by giving weight to the evaluation value designated by the user. On the contrary, when a large weight is given to the evaluation value obtained by the calculation of the equation (1), an accurate approximation can be performed by a quantitative evaluation of the volume change in shape. In this manner, the change in shape can be freely controlled by the weighting process.
When the evaluation values for the edges of the polygon data are obtained in step S2 as mentioned above, the obtained evaluation values are sorted and the edge having the minimum evaluation value is selected in step S3. When sorting the edges, for example, quicksort, a known technique, can be used; other sorting methods can obviously also be used. Since sorting methods including quicksort are described in detail in the "Algorithm Dictionary" published by Kyoritsu Publication Co., Ltd. and elsewhere, their description is omitted here. The selected edge having the minimum evaluation value is removed in step S4.
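Steps S2 and S3 can be sketched as follows. The Edge fields, the weighted blending with a user-designated value, and the use of Python's built-in ordering in place of quicksort are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    V: float            # term Vi: virtual volume change if removed
    S: float            # term Si: edge length times adjacent plane areas
    length: float       # tie-breaker: shorter edges are removed first
    user_value: float = 0.0   # optional value designated by the designer

def evaluate(edge, a=1.0, b=1.0, w_calc=1.0, w_user=0.0):
    """Evaluation value of equation (1), a*Vi + b*Si, optionally blended
    with a user-designated value by weighted addition."""
    return w_calc * (a * edge.V + b * edge.S) + w_user * edge.user_value

def select_edge_to_remove(edges, **kw):
    """Step S3: pick the edge whose removal contributes least to the
    model shape, breaking ties in favor of the shorter edge."""
    return min(edges, key=lambda e: (evaluate(e, **kw), e.length))
```

Giving `w_user` a large value reflects the designer's intent; giving `w_calc` a large value favors the quantitative volume-change evaluation, as described above.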
Although the case where the edge having the minimum evaluation value is simply removed has been described here, the removal order of the edges, or edges which are not to be removed, can also be designated arbitrarily. When an edge is not removed, there is no change in the shape of that portion. For example, in a case where it is desirable that the shape not be changed, such as a portion in which two models are in contact with each other, it is sufficient to designate that portion as one from which no edge is removed.
When the edge is removed in step S4, the vertices (v1 and v2 in this case) constructing the edge are lost. In step S5, therefore, a new vertex position in association with the edge removal is determined.
In this instance, the shape after the edge removal depends on the position of the vertex v1 which remains.
Although in the above description the vertex v1, which is left and becomes a new vertex, is arranged at the position where the volume change amounts on both sides of the edge removed in step S5 are equal irrespective of the peripheral shape of the edge, the invention is not limited to this example. For example, the vertex v′ can also be arranged at the position where the volume change upon edge removal is the minimum. As mentioned above, the method of arranging the vertex v′ at the position where the volume change amounts on both sides of the edge are equalized and the method of arranging the vertex v′ at the position where the volume change is the minimum can be used selectively in accordance with the desire of the user.
In consideration of the peripheral shape of the edge, when the shape is concave or convex, the vertex v′ can be arranged at the position where the volume change after the edge removal is the minimum. When the periphery has an S-character shape, the vertex v′ can be arranged at the position where the volume change amounts on both sides of the edge are equalized. In this case, the position of the vertex v′ is deviated toward one of the ends of the edge in the case of the concave or convex shape. In the case of the S-character shape, the vertex v′ is arranged in the middle of the S character. Thus, both an effect of suppressing the volume change and an effect of absorbing continuous changes like an S character by a plane can be achieved.
For example, an area having small S-character shapes like saw teeth can be approximated by one plane of the general shape, while a portion having a large change other than the S-character shape can be approximated by a shape which is closer to the original shape. In an approximation in which the shape has priority, such a setting is also possible. The approximating methods can be used selectively in accordance with the intention of the user.
It is also possible not to change the vertex position remaining after the edge removal from the vertex position before the edge removal. That is, in the example shown in
When the edge has been evaluated and removed and the new vertex associated with the edge removal has been determined in the steps up to step S5, a process regarding the texture adhered to each plane of the polygon model is executed in step S6.
The vertex v6 is removed by the approximation of the polygon model, and the two vertices v3 and v6 in this model are integrated into one vertex v3′. In association with this, by removing the edge e(v3, v6) comprising v3 and v6, the triangular areas on both sides including the removed edge are lost. In this instance, unless the loss of those triangular areas is considered, the image data comprising the texture coordinates vt3, vt4, and vt6 and the image data comprising vt3, vt5, and vt6 are lost.
As shown by the texture in the diagram on the right side in
In this example, the vertices v3 and v6 are integrated on the polygon model and the vertex v3 remains. The remaining vertex v3 is set to a vertex v3′. The position of the vertex v3′ is arranged at a predetermined distribution ratio t on the coordinates along the edge e(v3, v6) comprising v3 and v6 before approximation. In this case, the coordinates of the vertex v3′ can be calculated by (1−t)×v3 + t×v6. When 0≦t≦1, the new vertex lies on the line segment of the edge e(v3, v6) before approximation, and when t<0 or 1<t, it lies outside the edge e(v3, v6). By changing the value of t, therefore, the shape change amount after the model was approximated by the edge removal can be controlled.
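The calculation of the new vertex v3′, and of the corresponding texture coordinate vt3′ described next, can be sketched as below; representing both as plain coordinate tuples interpolated with the same ratio t is an assumption for illustration:

```python
def merge_at_ratio(p3, p6, t):
    """New position at distribution ratio t along the removed edge:
    (1 - t) * p3 + t * p6.  For 0 <= t <= 1 the result lies on the
    original segment; for t < 0 or t > 1 it lies outside it.  The same
    formula serves for model vertices (v3, v6 -> v3') and for texture
    coordinates (vt3, vt6 -> vt3')."""
    return tuple((1 - t) * a + t * b for a, b in zip(p3, p6))
```

With t = 0 the new vertex coincides with v3 and the shape change is concentrated on the v6 side; with t = 0.5 it sits at the midpoint of the removed edge.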
As mentioned above, the vertices v3 and v6 are integrated on the polygon model and set to the vertex v3′, and v3′ is arranged between the vertex v3 and the vertex v6. The texture coordinates vt3 and vt6 corresponding to those two vertices are, therefore, also integrated after approximation and set to coordinates vt3′. The coordinates vt3′ are arranged between the coordinates vt3 and vt6 before approximation.
In this instance, when the position of the coordinates vt3 of the texture data corresponding to the vertex v3 on the polygon model is not changed in accordance with the change in model shape as mentioned above, for example, in the texture shown in
With respect to an original polygon model shown in
When the texture is adhered to the polygon model, there is a case where not only one texture but also a plurality of different textures are allocated to the model. In this case, a boundary in which the texture is switched from a certain texture to another texture exists.
In case of adhering the texture to the polygon model, as mentioned above, the texture is allocated to each vertex of the model. Even at the boundary of the texture, therefore, the boundary is allocated to each vertex constructing an edge of the model. Further, as mentioned above, the approximation of the model is performed by repeating the edge removal a desired number of times. In this instance, if the texture area allocated to the edge targeted for removal lies inside the texture, as shown in
However, when the area of the image allocated to the edge targeted for removal lies just on the boundary of the image, the polygon model is approximated by the edge removal and, since the vertex position is moved, a plurality of textures are mixed and the appearance of the texture is broken. To prevent this, it is necessary to discriminate edges so as not to break the image boundary at the time of the edge removal, and to determine the size of the change of the outline portion caused by the edge removal.
As shown in
In this case, since the outline portion of the face image has also been adhered to each of the vertices v3 to v6, as shown in
To prevent this, a removal evaluating function of the edge as a boundary portion of the texture is introduced and when the shape of the texture boundary is largely changed by the edge removal, it is necessary to use any one of the following methods. Namely, as a first method, the relevant edge is not removed. As a second method, although the edge is removed, a movement amount of the vertex position after the removal is adjusted. The following equation (2) is used as a removal evaluating function of each edge in this instance.
In the equation (2), E denotes the vector having the direction and length of the edge e, Ni indicates the normal vector of boundary edge i, and Li its length. The range of i covers all of the edges of the boundary lines existing before and after the edge targeted for removal. The equation (2) denotes the area change amount when the edge of the boundary portion is removed. Therefore, when the calculated value of the equation (2) is large, the change of the outline portion caused by the edge removal is large.
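The equation (2) itself is likewise not reproduced in this text. One plausible form consistent with the description (a sum, over the boundary edges i before and after the removal target, of terms built from the edge vector E, the normals Ni, and the lengths Li, yielding an area change) is the swept-area sum below; the exact form and any constant factor are assumptions:

```latex
F(e) = \sum_i \left|\,\mathbf{E}\cdot\mathbf{N}_i\,\right| L_i \qquad (2)
```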
Namely, when the calculated value of the equation (2) is large, the area change in the outline portion of the texture increases, so that there is a risk that the texture shape will be broken. To prevent this, there is a method whereby the relevant edge is not removed, like the foregoing first method. However, like the foregoing second method, there is also a method whereby the texture coordinates after the edge removal are moved within a range where the value of the equation (2) is smaller than a designated value, thereby decreasing the change amount of the outline portion. By using the second method, the breakage of the texture after the approximation can be suppressed.
As mentioned above, the approximated polygon model, with the texture adhered, can be obtained at a desired precision. In this case, when the texture is adhered to the original model, there is no need to adhere the texture again to the model after completion of the approximation, and the approximated model with the texture can be obtained automatically.
As mentioned above, the approximated model obtained by repeating the processes in steps S2 to S6 is stored in the external storing apparatus such as the hard disk 6 or memory 7. When displaying in step S8, the approximated model stored in the external storing apparatus is read out, drawn, and displayed on the display apparatus 8. As already described with regard to the prior art, in this display, for example, when the model is displayed as a small image on the picture plane because it appears at a remote location, or when the observer fails to notice the model because it is out of the target point on the picture plane, the model is switched to the model of an approximated layer and the image is displayed.
Upon switching to the approximated model, if the model is suddenly switched to a model whose degree of approximation differs greatly, a sudden change occurs in the shape of the displayed model at the moment of the switching and a feeling of disorder is given to the observer.
To prevent that feeling of disorder, it is sufficient to prepare a number of models whose approximation degrees are changed only slightly, store them in the external storing apparatus, and perform the display while sequentially switching among those models. In this case, however, the number of models to be stored increases, which is inefficient. Therefore, to realize a smooth continuous conversion even with a small number of models, it is sufficient to interpolate between the models of the discrete layers and obtain the model of a middle layer.
For example, in the example shown in
Such a forming method of the approximated model in the middle layer between the discrete layers has already been described in detail in Japanese Patent Application No. 6-248602 regarding the proposition of the present inventors.
In the example, the vertices v1 and v2 bounding the edge e(v1, v2) of the layer N are integrated in the layer N+1, the deleted vertex v2 being merged into v1. From this correspondence relation of the vertices, in the middle layer N′, the positions of the vertices v1′ and v2′ bounding an edge e′(v1′, v2′) corresponding to the edge e(v1, v2) of the layer N can be obtained by linear interpolation between the layers N and N+1. Although an example in which one middle layer is obtained is shown here, the degree of linear interpolation can be changed in accordance with a desired number of middle layers, and a plurality of middle layers can be obtained. The formation of the approximated model of the middle layer can be performed in real time in accordance with the situation in which the model is displayed.
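The linear interpolation between layers can be sketched as below; representing a vertex as a coordinate tuple and choosing evenly spaced interpolation parameters are assumptions for illustration. The same routine serves for the texture coordinates vt1 and vt2 mentioned later:

```python
def middle_layers(pos_n, pos_n1, count):
    """Interpolated positions of `count` evenly spaced middle layers
    between a vertex position in layer N (pos_n) and the position it is
    merged into in layer N+1 (pos_n1)."""
    layers = []
    for k in range(1, count + 1):
        s = k / (count + 1)   # interpolation parameter of the k-th layer
        layers.append(tuple((1 - s) * a + s * b
                            for a, b in zip(pos_n, pos_n1)))
    return layers
```

Because each middle layer is a pure function of the two stored layers, it can be computed at display time instead of being stored, which keeps the number of stored models small.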
Although the case where the approximated model of the middle layer is formed and displayed in a real-time manner while displaying the model has been described here, the invention is not limited to such an example. For instance, it is also possible to practice the invention in a manner such that the approximated model of the middle layer is previously formed and stored in the external storing apparatus and the stored approximated model of the middle layer is read out at the time of the display.
Although the case where one edge is removed has been mentioned as an example here, since the edge removal is repeated a plurality of times in the approximation of an actual model, one vertex of a certain layer corresponds to a plurality of vertices of another layer which is closer to the original model. By using the correspondence relation of the vertices in those two layers as mentioned above, the vertices of the model can be made to correspond among all of the layers. The model of the middle layer is obtained on the basis of the correspondence relation of the vertices derived as mentioned above.
As mentioned above, since the coordinates of the image data in the texture are allocated to each vertex of each model, in a manner similar to the case of the vertices of such a model, the model to which the texture was adhered in the middle layer can be obtained by the interpolation of the texture coordinates vt1 and vt2 allocated to the vertices v1 and v2, respectively. By such a process, the models in a range from the original model to the most approximated model can be obtained smoothly and continuously.
By the above processes, the discrete hierarchical approximated models can be obtained, and the model of a middle layer can also be obtained. The approximated model obtained and stored as mentioned above is switched in accordance with the apparent size, position, and speed of the model on the picture plane and the attention point of the viewer, and is displayed on the display apparatus 8 in step S8.
As specifically shown in
Although the case where the texture image is adhered to the polygon model has been described above, the invention can be also obviously applied to the case where the texture image is not adhered. In this case, step S6 can be omitted in the flowchart shown in
As described above, according to the invention, when image data (texture) is adhered to geometric data such as polygon data used in CG, the model can be approximated to a desired degree of detail while preventing breakage of the texture shape and apparent deterioration of the quality.
According to the invention, therefore, the geometric model used in CG can be approximated in a state in which the texture is adhered. Moreover, not only is the model approximated, but breakage of the appearance of the texture in the approximation result can also be suppressed.
By using a geometric model approximated by the method of the invention in the drawing of CG, the demands for drawing at a high speed and at a high picture quality can both be satisfied.
Further, according to the invention, an importance degree of each edge constructing the geometric model used in CG can be evaluated by an evaluation value, and the geometric model can be approximated by preferentially removing edges whose evaluation values are low.
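Evaluation-driven removal can be sketched as a priority queue ordered by the evaluation value. In this illustrative sketch the evaluation value is simply the edge length; the patent's actual evaluation also accounts for the texture and the surrounding shape, so this is an assumed stand-in, not the patented criterion.

```python
# Illustrative sketch of evaluation-driven edge removal: each edge gets
# an evaluation value (here simply its Euclidean length, as a stand-in
# for the patent's evaluation) and edges with low values come first.

import heapq
import math

def edge_length(vertices, edge):
    """Euclidean length of an edge given a vertex-id -> (x, y, z) map."""
    return math.dist(vertices[edge[0]], vertices[edge[1]])

def removal_order(vertices, edges):
    """Return the edges sorted so that low-evaluation edges are
    removed first."""
    heap = [(edge_length(vertices, e), e) for e in edges]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

vertices = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (3.0, 0.0, 0.0)}
order = removal_order(vertices, [(0, 2), (0, 1), (1, 2)])
```

In a full simplifier the queue would be updated after each removal, since collapsing an edge changes the evaluation values of its neighbors.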
According to the invention, the position of the vertex remaining after an edge is removed can be determined so as to suppress a change in the general shape. Thus, a feeling of incongruity when drawing with the approximated model can be suppressed.
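One simple way to illustrate such a placement rule, assumed here and not the patent's actual criterion: place the surviving vertex at the point minimizing the squared distance to the removed edge's endpoints and their neighboring vertices, which for this criterion is their centroid.

```python
# Minimal illustration (assumed rule, not the patent's): position the
# surviving vertex at the centroid of the removed edge's endpoints and
# their neighbors, which minimizes the sum of squared distances to them
# and so tends to keep the local shape from shifting.

def surviving_vertex_position(points):
    """points: list of (x, y, z) tuples. Returns their centroid."""
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

new_pos = surviving_vertex_position(
    [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)])
```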
According to the invention, figure data used in CG can be approximated at a plurality of resolutions. By using the figure data derived by the invention, the goals of drawing at a high speed and drawing with a high quality can both be satisfied.
The present invention is not limited to the foregoing embodiments but many modifications and variations are possible within the spirit and scope of the appended claims of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4152766||Feb 8, 1978||May 1, 1979||The Singer Company||Variable resolution for real-time simulation of a polygon face object system|
|US4600919||Aug 3, 1982||Jul 15, 1986||New York Institute Of Technology||Three dimensional animation|
|US4694407||Jun 11, 1985||Sep 15, 1987||Rca Corporation||Fractal generation, as for video graphic displays|
|US4783829||Feb 22, 1984||Nov 8, 1988||Hitachi, Ltd.||Pattern recognition apparatus|
|US4941193 *||Oct 2, 1987||Jul 10, 1990||Iterated Systems, Inc.||Methods and apparatus for image compression by iterated function system|
|US4969204||Nov 29, 1989||Nov 6, 1990||Eastman Kodak Company||Hybrid residual-based hierarchical storage and display method for high resolution digital images in a multiuse environment|
|US5029228||Dec 22, 1988||Jul 2, 1991||Mitsubishi Denki Kabushiki Kaisha||Image data filing system|
|US5159512 *||Jul 5, 1991||Oct 27, 1992||International Business Machines Corporation||Construction of Minkowski sums and derivatives morphological combinations of arbitrary polyhedra in CAD/CAM systems|
|US5276786||Jul 1, 1992||Jan 4, 1994||Quantel Limited||Video graphics systems separately processing an area of the picture before blending the processed area into the original picture|
|US5341466||May 9, 1991||Aug 23, 1994||New York University||Fractal computer user centerface with zooming capability|
|US5373375||Dec 21, 1990||Dec 13, 1994||Eastman Kodak Company||Metric conversion mechanism for digital images in a hierarchical, multi-resolution, multi-use environment|
|US5384904||Dec 8, 1992||Jan 24, 1995||Intel Corporation||Image scaling using real scale factors|
|US5448686||Jan 2, 1992||Sep 5, 1995||International Business Machines Corporation||Multi-resolution graphic representation employing at least one simplified model for interactive visualization applications|
|US5471568 *||Jun 30, 1993||Nov 28, 1995||Taligent, Inc.||Object-oriented apparatus and method for scan line conversion of graphic edges|
|US5490239||Sep 8, 1994||Feb 6, 1996||University Corporation For Atmospheric Research||Virtual reality imaging system|
|US5506947||Sep 22, 1994||Apr 9, 1996||International Business Machines Corporation||Curve and surface smoothing without shrinkage|
|US5590248 *||Feb 24, 1995||Dec 31, 1996||General Electric Company||Method for reducing the complexity of a polygonal mesh|
|US5611036 *||Sep 23, 1994||Mar 11, 1997||Cambridge Animation Systems Limited||Apparatus and method for defining the form and attributes of an object in an image|
|US5613051||Dec 21, 1994||Mar 18, 1997||Harris Corp.||Remote image exploitation display system and method|
|US5621827||Jun 28, 1994||Apr 15, 1997||Canon Kabushiki Kaisha||Image processing method and apparatus for obtaining object data to reconstruct the original image|
|US5689577||Oct 14, 1994||Nov 18, 1997||Picker International, Inc.||Procedure for the simplification of triangular surface meshes for more efficient processing|
|US5751852 *||Apr 29, 1996||May 12, 1998||Xerox Corporation||Image structure map data structure for spatially indexing an image|
|US5761332||Mar 11, 1996||Jun 2, 1998||U.S. Philips Corporation||Method of reconstructing the surface of an object|
|US5774130||Aug 31, 1995||Jun 30, 1998||Sony Corporation||Computer animation generator creating hierarchies of models for rapid display|
|US5796400||Aug 7, 1995||Aug 18, 1998||Silicon Graphics, Incorporated||Volume-based free form deformation weighting|
|US5809322 *||Dec 9, 1994||Sep 15, 1998||Associative Computing Ltd.||Apparatus and method for signal processing|
|US5929860||Feb 7, 1997||Jul 27, 1999||Microsoft Corporation||Mesh simplification and construction of progressive meshes|
|US5963209||Jan 11, 1996||Oct 5, 1999||Microsoft Corporation||Encoding and progressive transmission of progressive meshes|
|US5963668 *||Nov 25, 1996||Oct 5, 1999||Sony Corporation||Computer animation generator|
|US5966133||Feb 7, 1997||Oct 12, 1999||Microsoft Corporation||Geomorphs and variable resolution control of progressive meshes|
|US6046744||Feb 7, 1997||Apr 4, 2000||Microsoft Corporation||Selective refinement of progressive meshes|
|EP0156343A2 *||Mar 25, 1985||Oct 2, 1985||Hitachi, Ltd.||Partial pattern matching method and apparatus|
|EP0734163A2 *||Apr 13, 1995||Sep 25, 1996||Daewoo Electronics Co., Ltd||A contour approximation apparatus for representing a contour of an object|
|JPH0415772A||Title not available|
|JPH0652270A||Title not available|
|JPH01205277A||Title not available|
|JPH05250445A||Title not available|
|JPH05266212A||Title not available|
|JPH05266213A||Title not available|
|JPH05290145A||Title not available|
|JPH06231276A||Title not available|
|JPH06251126A||Title not available|
|JPH08272957A *||Title not available|
|JPH09198524A||Title not available|
|JPS63118890A *||Title not available|
|1||An Adaptive Subdivision Method for Surface-Fitting from Sampled Data; Schmitt et al.; SIGGRAPH '86; vol. 20, No. 4, 1986; pp. 179-188.|
|3||Hiroyuki Yamamoto, et al., "The Delaunay Triangulation for Accurate Three-Dimensional Graphic Model", (IEICE) Transactions, D-II, vol. J78-D-II, No. 5, pp. 745-753, May 1995.|
|4||Hoppe, Hugues et al, "Mesh Optimization," Computer Graphics (SIGGRAPH 1993 Proceedings), ACM, Aug. 1993, pp. 19-26.|
|5||Hoppe, Hugues et al., "Mesh Optimization," Computer Graphics (SIGGRAPH '93 Proceedings), 1993, pp. 19-26.|
|6||Hoppe, Hugues, "Progressive Meshes," Computer Graphics (SIGGRAPH '96 Proceedings), 1996, pp. 99-108.|
|7||Japanese Office Action dated Jul. 10, 2007 for corresponding patent application No. 2005-299609.|
|8||Japanese Office Action issued Mar. 10, 2009 for corresponding Japanese Application No. 2005-299609/2008-6349.|
|9||Mesh Optimization; Computer Graphics Proceedings, Annual Conference Series, 1993 Hoppe et al.; pp. 19-26.|
|10||Re-Tiling Polygonal Surfaces; Computer Graphics, vol. 26, No. 2, Jul. 1992; Greg Turk; pp. 55-64.|
|11||Shinji Uchiyama, et al., "Hierarchical Shape Representation with Adaptive Meshes from a Range Image", Media Technology Laboratory, Canon Inc., pp. 351-361, vol. 36 No. 2, Feb. 1995.|
|12||Turk, Greg, "Re-Tiling Polygonal Surfaces," ACM SIGGRAPH Computer Graphics, ACM, Jul. 1992, vol. 26, No. 2, pp. 55-64.|
|13||Wentao Zheng, et al., "Surface Representation Based on Invariant Characteristics", Technical Report of the Institute of Television, vol. 18, No. 15, pp. 31-38, Feb. 25, 1994.|
|U.S. Classification||382/203, 382/266, 382/173, 345/420|
|International Classification||G06K9/46, G06T11/20, G06T11/00, G06T15/04, G06T17/00, G06T15/00|
|Cooperative Classification||G06T2210/36, G06T17/20, G06T17/205|
|European Classification||G06T17/20, G06T17/20R|